
HK1044389A - Method and apparatus for providing real-time call processing services in an intelligent network - Google Patents


Info

Publication number
HK1044389A
HK1044389A (application HK02105474.2A)
Authority
HK
Hong Kong
Prior art keywords
service
instance
node
call
network
Prior art date
Application number
HK02105474.2A
Other languages
Chinese (zh)
Inventor
Deo Ajay
Wong Wendy
Wang Henry
Syed Sami
Original Assignee
Deo Ajay
Wong Wendy
Wang Henry
Syed Sami
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deo Ajay, Wong Wendy, Wang Henry, Syed Sami
Publication of HK1044389A

Description

Method and apparatus for providing real-time call processing services in an intelligent network
The present invention relates generally to intelligent network systems for providing communication services, and more particularly, to a novel service control system for providing real-time event handling services at each of a plurality of service nodes distributed throughout an intelligent network.
A network service is a function performed by a communication network, such as a data or telephony network, and its associated resources in response to an interaction with one or more subscribers. For example, a subscriber may invoke a network-resident service, such as call forwarding or voice mail, by dialing a particular sequence of digits. Other network services may be directed at assisting a network owner with security, authentication, and authorization. Adding or modifying a service requires changes to be made in the communication network.
Most conventional telecommunication networks are composed of interconnected switches and communication facilities. These switches are controlled by integrated or embedded processors operated by proprietary software or firmware designed by the switch manufacturer. Typically, the switch manufacturer's software or firmware must support all functional aspects of service processing, call processing, device processing, and network management. This means that when a network owner wishes to implement a new service or modify an existing service, the software of every switch in the network must be revised by the various switch manufacturers.
The fact that a network contains different switch models from different manufacturers requires careful development, testing, and deployment of the new software. The time required to develop, test, and deploy the new software is lengthened because the size and complexity of the code on each switch grow with each successive revision. Thus, this process can take several years. In addition, the added complexity burdens the switch processors, increases the chances of switch malfunction, and may require the modification or replacement of the switch.
Furthermore, the fact that multiple network owners depend upon a common set of switch manufacturers results in an undesirable situation that limits competition. First, a manufacturer's software release may attempt to incorporate changes requested by several network owners, thus preventing any one network owner from truly differentiating its services from the services provided by its competitors. This also forces some network owners to wait until the manufacturer incorporates requests from other network owners into the new release. Second, a switch software release incorporating a function as requested by one network owner to implement a new service can unintentionally become accessible to other network owners.
These problems have become intolerable as the demand for new network services has increased over the last five to ten years due to increased subscriber mobility, increased variety and bandwidth of traffic, dissolution of traditional numbering plans, more sophisticated services, and increased competition. Thus, it is widely recognized that new network architectures need to incorporate a more flexible way of creating, deploying, and executing service logic. In order to fully appreciate the novel architecture of the present invention described below, the related art is described with reference to fig. 1.
Referring to fig. 1, there is shown a logical representation of various switching architectures, including that of the present invention. The monolithic switch, generally designated 20, includes a service processing function 22, a call processing function 24, a device processing function 26 and a switch fabric 28. All of these functions 22, 24, 26 and 28 are hard-coded, intermixed and undifferentiated, as symbolized by the group 30. Moreover, the functions 22, 24, 26 and 28 are designed by the switch manufacturer and operate on proprietary platforms that vary from manufacturer to manufacturer. As a result, these functions 22, 24, 26 and 28 can be modified only with the manufacturer's assistance, which slows service development and implementation and increases the cost of bringing a new service to market. The development of new and innovative services, call processing, data processing, signal processing, and network operations is therefore constrained by the manufacturer's control over its proprietary switch hardware and software, and by the inherent difficulty of establishing and implementing industry standards.
The service processing function 22 is encoded within the monolithic switch 20 and allows only local control of this process based on local data content and the number dialed. This local information is interpreted by a hard-coded process engine that executes the encoded service function. The call processing function 24 is hard-coded and provides call origination and call termination functions. This process actually sets up and tears down individual connections to complete a call. Likewise, the device processing function 26 is also hard-coded and provides all of the data processing related to the physical resources involved in a call. The switch fabric 28 represents the hardware component of the switch and the computer that runs the monolithic software provided by the switch manufacturer, such as Northern Telecom. The switch fabric 28 provides the physical facilities necessary to establish connections and may include, but is not limited to, bearer devices (T1 and DS0), switching matrix devices (network planes and their processors), link-layer signal processors (SS7, MTP, ISDN, LAPD) and specialized circuits (conference ports, audio tone detectors).
In an attempt to address the problems described above, the International Telecommunication Union and the European Telecommunication Standards Institute endorsed the ITU-T Intelligent Network Standard ("IN"). Similarly, Bellcore endorsed the Advanced Intelligent Network Standard ("AIN"). Although these two standards differ in presentation and the state of their development, they share almost identical goals and basic concepts. Accordingly, these two standards are viewed herein as a single network architecture in which the service processing function 22 is separated from the switch.
Using the IN and AIN architectures, a network owner could presumably deploy a new service by creating and installing a new service logic program ("SLP"), which is essentially a table of service-independent building blocks ("SIBBs") to be invoked during a given type of call. According to this approach, a number of specific element types interoperate in conjunction with an SLP to provide services to network subscribers. As a result, any new or potential services are limited by the existing set of SIBBs.
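The SLP-as-a-table-of-SIBBs approach described above can be sketched as follows. This is a minimal illustrative model only; the building-block names, the toy translation table, and the call-context fields are all invented for exposition and do not appear in the patent.

```python
# Illustrative sketch: a service logic program ("SLP") modeled as an
# ordered table of service-independent building blocks ("SIBBs").
# All names and behaviors here are hypothetical simplifications.

def translate_number(call):
    # SIBB: map a dialed 1-800 number to a routable number (toy table).
    table = {"18005551234": "12145550100"}
    call["routing_number"] = table.get(call["dialed"], call["dialed"])
    return call

def route_call(call):
    # SIBB: mark the call as routed to the translated number.
    call["routed_to"] = call["routing_number"]
    return call

def run_slp(sibbs, call):
    """Invoke each SIBB in table order against the call context."""
    for sibb in sibbs:
        call = sibb(call)
    return call

# An SLP for a toll-free service is simply the ordered table of SIBBs.
toll_free_slp = [translate_number, route_call]
```

The limitation the paragraph describes is visible in the sketch: `run_slp` can only compose behaviors already present in the SIBB set, so a genuinely new service capability requires a new building block, not just a new table.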
The IN or AIN architecture, generally indicated at 40, logically separates the functions of the monolithic switch 20 into a service control point ("SCP") 42 and a service switching point ("SSP") and switching system 44. The SCP42 contains the service processing function 22, whereas the SSP and switching system 44 contain the call processing function 24, the device processing function 26, and the switch fabric 28. In this case, the call processing function 24, the device processing function 26, and the switch fabric 28 are hard-coded, intermixed, and undifferentiated, as symbolized by the group 46.
A service switching point ("SSP") is a functional module that resides at a switch in order to recognize when a subscriber's signaling requires more than simple routing based solely upon the number dialed. The SSP suspends further handling of the call while it initiates a query, for proper handling of the call, to the remote SCP42, which essentially acts as a database server for a number of switches. This division of processing results in the offloading from the switch of the infrequent, yet time-consuming, task of handling special service calls. Furthermore, this moderated centralization strikes a balance between having one easily modified, heavily loaded resource serve the whole network and deploying a complete copy of that resource at every switch.
Referring now to fig. 2, there is shown a schematic diagram of a telecommunications system employing an IN or AIN architecture, indicated generally at 50. Various subscriber systems, such as ISDN terminal 52, first telephone 54, second telephone 56 are connected to the SSP and switching system 44. The ISDN terminal 52 is connected to the SSP and switching system 44 via a signalling line 60 and a transmission line 62. The first telephone 54 is connected to the SSP and switching system 44 by a transmission line 64. The second telephone 56 is connected to the remote switching system 66 by a transmission line 68, and the remote switching system 66 is connected to the SSP and switching system 44 by a transmission line 70.
As previously described with reference to fig. 1, the SSP70 is a functional module that resides at a switch in order to recognize when a subscriber's signaling requires more than simple routing based upon the number dialed. The SSP70 suspends further handling of the call while it initiates a query for proper handling of the call. The query is sent in the form of SS7 messaging to the remote SCP42. The service control point 42 is so named because changing the database content at this location can alter the network function as it appears to subscribers connected through the many subtending switches. The query is sent through signaling line 72 to a signal transfer point ("STP") 74, which is simply a router for SS7 messaging among these elements, and then through signaling line 76 to the SCP42.
The integrated services management system ("ISMS") 78 serves as an administrative tool to deploy or alter services, or to manage per-subscriber access to services. The ISMS78 operates mainly by altering the operating logic and data stored in the SSP70 and SCP42. The ISMS78 has various user interfaces 80 and 82. The ISMS78 is connected to the SCP42 by operations line 84, to the SSP and switching system 44 by operations line 86, and to the intelligent peripheral ("IP") 88 by operations line 90. The intelligent peripheral 88 is a device used to add functions to the network that are not available on the switches, such as a voice response or speech recognition system. The IP88 is connected to the SSP and switching system 44 by signaling line 92 and transmission line 94.
Call processing according to the prior art will now be described with reference to fig. 2. A call is initiated when the subscriber picks up the receiver and begins dialing. The SSP70 at the company switch monitors the dialing and recognizes the trigger sequence. The SSP70 suspends further handling of the call until the service logic can be consulted. The SSP70 then composes a standard SS7 message and sends it through the STP74 to the SCP42. The SCP42 receives and decodes the message and invokes the SLP. A service logic interpreter ("SLI") interprets the SLP, which may call on the SCP to consult databases for other functions, such as number translation. The SCP42 returns an SS7 message to the SSP and switching system 44 regarding the handling of the call, or otherwise dispatches messages to network elements to carry out the correct service. At the conclusion of the call, an SS7 message is sent among the switches to tear down the call, and call detail records are created by each switch involved in the call. The call detail records are collected, correlated, and resolved offline for each call to derive the billing for toll calls, thereby completing call processing.
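The prior-art trigger-and-query flow just described (SSP detects a trigger, suspends the call, and queries the SCP through the STP) can be sketched as below. The classes, message fields, and the four-digit trigger prefix are all hypothetical simplifications, not an SS7 implementation.

```python
# Illustrative sketch of the prior-art IN/AIN query flow. The SSP
# detects a trigger sequence, suspends call handling, and queries
# the SCP via the STP router. All names are invented for exposition.

class SCP:
    def __init__(self, slp_db):
        self.slp_db = slp_db  # dialed-number prefix -> handling instruction

    def handle_query(self, message):
        # Decode the query and invoke the matching service logic.
        instruction = self.slp_db.get(message["dialed"][:4], "route_normally")
        return {"type": "response", "instruction": instruction}

class STP:
    """Router that simply forwards SS7-style messages to the SCP."""
    def __init__(self, scp):
        self.scp = scp

    def forward(self, message):
        return self.scp.handle_query(message)

class SSP:
    def __init__(self, stp, triggers):
        self.stp = stp
        self.triggers = triggers  # set of trigger prefixes

    def process_call(self, dialed):
        if dialed[:4] in self.triggers:
            # Suspend further handling and query the remote service logic.
            response = self.stp.forward({"type": "query", "dialed": dialed})
            return response["instruction"]
        return "route_normally"
```

Note how every special-service decision requires a round trip to the remote SCP, which is the centralization/offloading trade-off discussed above.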
The IN and AIN architectures attempt to predefine a standard set of functions to support all foreseeable services. These standard functions are hard-coded into various state machines in the switch. Unfortunately, any new function that arises in conjunction with new technologies or unforeseen services cannot be implemented without an extensive overhaul and testing of the network software across many vendor platforms. Furthermore, if a new function requires changes to the standardized call model, protocols, or interfaces, the implementation of services using that function may be delayed until the changes are ratified by an industry standards group. But even when attempts are made to broaden the functions supported by the IN and AIN draft standards, equipment vendors may refuse to endorse these drafts because of the tremendous increase in code complexity.
A further limitation of the IN and AIN architectures, seen in fig. 2, is that the call processing and device processing functions, namely the SSP70, operate within the switch. As a result, these functions must be provided by each switch manufacturer using its proprietary software. Thus, the network owner remains heavily dependent upon manufacturer software releases to support new functions. To further complicate the matter, the network owner cannot test the SSP70 module in conjunction with other modules in a unified development and testing environment. Moreover, an SSP70 intended for a switch manufacturer's processing environment cannot be guaranteed to be compatible with the network owner's service creation environment.
This reliance by multiple network owners upon a common set of switch manufacturers results in two undesirable situations that limit competition. First, a manufacturer's software release may attempt to incorporate changes requested by several network owners, thereby preventing the network owners from differentiating their services from those provided by their competitors. This also forces some network owners to wait until the manufacturer incorporates requests from other network owners into the new release. Second, a switch software release incorporating a function as requested by one network owner to implement a new service can unintentionally become accessible to other network owners. Thus, despite the intentions of the IN and AIN architectures, the network owner's creation, testing, and deployment of new services remains impeded because the network owner has neither complete control of, nor access to, the functional elements that shape network service behavior.
In another attempt to address these problems, a separate switch intelligence and switch fabric ("SSI/SF") architecture, generally indicated at 150 (fig. 1), logically separates the SSP70 from the switching system 44. Referring back to fig. 1, the switch intelligence 152 contains the call processing function 24 and the device processing function 26, which are encoded in discrete state tables with corresponding hard-coded state machine engines, symbolized by circles 154 and 156. The interface between the switch fabric function 158 and the switch intelligence function 152 may be extended through a communications network, such that the switch fabric 158 and the switch intelligence 152 need not be physically located together, execute within the same processor, or even have a one-to-one correspondence. In turn, the switch intelligence 152 provides a consistent interface of simple, non-service-specific, non-manufacturer-specific functions common to all switches.
An intelligent computing complex ("ICC") 160 contains the service processing function 22 and communicates with multiple switch intelligence elements 152. This approach offers the network owner advantages in flexible service implementation because all but the most elementary functions are moved outside the realm of manufacturer-specific code. Further improvements may be realized by providing a more unified environment for the creation, development, testing, and execution of service logic.
As previously mentioned, current network switches are based upon monolithic, proprietary hardware and software. Although network switches can cost millions of dollars, such equipment is slow in terms of processing speed when viewed in light of currently available computing technology. For example, these switches are based on reduced instruction set computing ("RISC") processors operating in the 60 MHz range and communicate with other switches using a data communications protocol, such as X.25, that typically supports a transmission rate of 9.6 kb/s between the various platforms in a switching network. This is extremely slow when compared to personal computers that contain processors running at 200 MHz or above and high-end computer workstations that offer interfaces, such as ATM, running at 150 Mb/s. Accordingly, network owners need to be able to use high-end workstations instead of proprietary hardware.
The present invention is directed to a service control system for providing real-time service processing of all events and service requests received at a resource aggregation device, such as a switch or router, physically associated with each of a plurality of distributed service nodes of an intelligent communications network.
In general, the service control component of the present invention is capable of commanding an intelligent network resource aggregation device, such as an ATM switch, Internet gateway, intelligent peripheral, or other switch or router resource, in the processing of service requests, and further includes the intelligence required to process such service requests. In particular, the embedded intelligence enables the service control component to interact with other intelligent network components to access additional logic components or to obtain information (service or user data) needed to process a service logic event. The service control connects and interacts with the resource aggregation device and the local data management system during real-time service processing and has the logic and processing capabilities needed to process service attempts provided by the intelligent network. Service control is managed, updated and manipulated by service administrators and data management components of the intelligent network. The intelligent network is independent of and transparently provides intelligent call processing services to the call switching platform or resource aggregation device within which the call is received, and is readily adapted to handle call events. In this way, reliance on expensive, vendor-specific hardware, operating systems, and switching platforms is eliminated. The distributed intelligent network additionally supports location independent event processing service execution, allows modular software logic programs to run virtually anywhere in the fabric, and provides location independent communication between these distributed processors, further eliminating the need for specialized service nodes.
More particularly, the present invention controls one or more processes that are initiated when a service request is sent from a resource aggregation device to the service control component. The service control interacts with other components to access the data necessary to provide the requested service. The process completes when the requested sequence of service actions is finished or when the service user abandons use of the service. All resources engaged in performing the requested service are released at the conclusion of the process. Each service request initiates one instance (thread) of service processing, providing a large amount of parallel processing with minimal contention or bottlenecks.
Preferably, each service thread instance maintains its own event queue, while the service control provides an asynchronous path that channels events received for a particular call instance to the appropriate service thread queue, where they are stored and executed according to a predetermined priority associated with each event. The service control additionally provides an asynchronous path for sending events to the switch/resource aggregation device, or to another executing service logic program, even while the thread instance is blocked waiting for a response.
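The per-instance priority event queue described above can be sketched as follows. This is a toy model, assuming integer priorities where a lower number means higher priority; the event names and the arrival-order tie-break are illustrative choices, not taken from the patent.

```python
import heapq
import itertools

# Illustrative sketch: each service thread instance owns a priority
# event queue; the service control posts events asynchronously and the
# instance drains them in priority order. Priorities and event names
# are invented for exposition (lower number = higher priority).

class ServiceThreadInstance:
    def __init__(self, call_id):
        self.call_id = call_id
        self._queue = []
        self._seq = itertools.count()  # tie-break preserves arrival order

    def post_event(self, event, priority):
        # Asynchronous path: the poster never blocks on delivery.
        heapq.heappush(self._queue, (priority, next(self._seq), event))

    def drain(self):
        """Handle queued events, highest priority (lowest number) first."""
        handled = []
        while self._queue:
            _, _, event = heapq.heappop(self._queue)
            handled.append(event)
        return handled
```

The sequence counter is a common heap idiom: it makes same-priority events come out in arrival order and avoids comparing the event payloads themselves.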
According to the invention, the main responsibilities of the service control component include: receiving and processing events or requests from the switching platform or other external resources; identifying and invoking service logic programs to process the incoming requests; requesting service- or subscriber-related data from the data management storage device through the network operating system ("NOS") or directly through a database application program interface ("API"); updating service- or subscriber-related data in the data management component through the NOS; providing the resource aggregation device and other service logic programs with the ability to send prioritized events and messages in order to control user interaction; receiving from the resource aggregation device a set of messages comprising user input, e.g., a PIN or dual-tone multi-frequency ("DTMF") digits corresponding to a selected menu item; maintaining the state and data of all participants included in the same service processing instance; and generating billing records and transmitting them to the data management component.
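A few of the responsibilities listed above (receiving a request, identifying and invoking the matching service logic program, and emitting a billing record to data management) can be sketched as a dispatch loop. Everything here is a hypothetical simplification: the registry keyed by service name, the `ring:` result string, and the record fields are all invented for exposition.

```python
# Illustrative sketch of a service control dispatch cycle: receive a
# request, identify and invoke the SLP, and send a billing record to
# data management. All names and fields are invented for exposition.

class ServiceControl:
    def __init__(self, slp_registry, data_management):
        self.slp_registry = slp_registry        # service name -> SLP callable
        self.data_management = data_management  # collects billing records

    def on_request(self, request):
        # Identify and invoke the service logic program for this event.
        slp = self.slp_registry[request["service"]]
        result = slp(request)
        # Generate a billing record and transmit it to data management.
        self.data_management.append(
            {"call_id": request["call_id"], "service": request["service"]}
        )
        return result

def ring_slp(request):
    # Toy SLP: instruct the platform to ring the dialed number.
    return f"ring:{request['dialed']}"

records = []
control = ServiceControl({"1-800": ring_slp}, records)
```

In the real architecture the data-management interaction would go through the NOS or a database API rather than an in-memory list; the list merely stands in for that component.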
The various features of novelty which characterize the invention are pointed out with particularity in the claims annexed to and forming a part of this disclosure. For a better understanding of the invention, its operating advantages and specific objects attained by its uses, reference should be had to the accompanying drawings and descriptive matter in which there is illustrated and described a preferred embodiment of the invention.
The above and other advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
FIG. 1 is a logical representation of various switching fabrics;
FIG. 2 is a schematic diagram of a communication system using a typical intelligent network architecture in accordance with the prior art;
FIG. 3 is a schematic diagram of a communication system using an intelligent distributed network architecture;
FIG. 4(a) is a block diagram showing SA and DM components of a next generation intelligent network;
fig. 4(b) conceptually shows the function of the service handling part 300;
fig. 4(c) shows a functional structure of the data management section 400;
FIG. 5 is a logical and functional schematic diagram of a communication system using an intelligent distributed network architecture in accordance with the present invention;
FIG. 6 is a schematic diagram showing the layering of functional interfaces within an intelligent call processor in accordance with the present invention;
FIG. 7 is a schematic diagram illustrating the class hierarchy of managed objects within an intelligent call processor in accordance with the present invention;
FIG. 8 shows a preferred structure of a service control environment 430;
FIG. 9 shows the functional structure of the NOS NT and LRN functional subcomponents;
FIG. 10 shows the structure of a resource management system for an intelligent network;
FIGS. 11(a) and 11(b) illustrate a layer 3 intelligent network resource management function;
fig. 12(a) shows the SLEE startup process;
FIG. 12(b) shows a service manager process;
FIG. 12(c) shows SLEE class loader processing;
FIGS. 12(d) to 12(e) show flowcharts illustrating the service broker function;
FIG. 12(f) shows thread manager processing;
FIG. 12(g) shows event handling after a service broker;
FIGS. 13(a) through 13(c) illustrate an exemplary process flow for performing a 1-800/8xx call processing service;
FIG. 14 shows a call processing scheme serviced by the IDNA/NGIN.
The present invention is one component of a comprehensive intelligent network, referred to herein as an intelligent distributed network architecture ("IDNA") or next generation intelligent network ("NGIN"). As described herein, the NGIN architecture is designed to perform intelligent call processing services for any type of call received at a resource aggregation device or switching platform, e.g., a switch, router, or IP endpoint. The IDNA/NGIN preferably comprises a plurality of distributed service nodes, each node providing an execution environment that supplies the call processing functionality necessary to handle a call at the instance it is received at the switch or resource aggregation device physically associated with that particular service node. The NGIN is of a highly scalable architecture and is engineered to ensure that executable service objects, embodied as standalone service logic programs ("SLPs"), and the associated data for performing event services, e.g., 1-800 call processing or fax transmission, may be deployed to and maintained at the service nodes in a cost-effective manner. Through the use of CORBA-compliant object request broker technology, the intelligent network supports location- and platform-independent call processing service execution, independent of and transparent to the switching platform or resource aggregation device on which an event or call is received, and permits high-level logic programs to be run virtually anywhere in the network, independent of the service execution platform. Furthermore, the system provides location-independent communications among these distributed processes.
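The location-independent invocation enabled by an object request broker can be illustrated with a toy name registry: callers resolve a service by name and never learn which node hosts it. This sketch is not the CORBA API; the registry class, the `slp/1-800` naming convention, and the node labels are assumptions made for exposition.

```python
# Illustrative sketch of location-independent service resolution in the
# spirit of a CORBA-style object request broker. This toy "registry"
# stands in for a real ORB/naming service; all names are invented.

class Registry:
    def __init__(self):
        self._services = {}

    def bind(self, name, node, obj):
        # Record which node hosts the object, invisible to callers.
        self._services[name] = (node, obj)

    def resolve(self, name):
        # Callers obtain a reference without knowing the hosting node.
        _node, obj = self._services[name]
        return obj

registry = Registry()
registry.bind("slp/1-800", "node-a", lambda call: "routed")
slp = registry.resolve("slp/1-800")
```

Because resolution hides the hosting node, an SLP can be redeployed to another service node without changing any caller, which is the property the paragraph attributes to the NGIN.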
Referring now to FIG. 1, the intelligent distributed network architecture ("IDNA") is denoted generally as 170. The present invention unifies the ICC 160 and the switch intelligence 152 of the SSI/SF architecture 150 into an intelligent call processor ("ICP") 172. Unlike the IN or AIN architecture 40 or the SSI/SF architecture 150, whose functions are defined in state tables, the ICP172 contains the service processing function 22, the call processing function 24, and the device processing function 26 as managed objects within an object-oriented platform, which is symbolized by blocks 174, 176 and 178, respectively. The ICP172 is logically separated from the resource aggregation device 180.
Referring now to fig. 3, a communication system employing an intelligent distributed network architecture in accordance with the present invention is illustrated and generally designated 200. Wide area network ("WAN") 202 is a system that supports the distribution of applications and data over a wide geographic area. The transport network is based on a synchronous optical network ("SONET") and connects IDNA nodes 204 and allows applications within these nodes to communicate with each other.
Each IDNA node includes an intelligent call processor ("ICP") 172 and a resource aggregation device 180 (fig. 1). Fig. 3 illustrates an IDNA node 204 having a resource aggregation device A ("RCA") 206 and a resource aggregation device B ("RCB") 208. The ICP may be linked to an adjunct processor 210, which provides existing support functions such as provisioning, billing and restoration; however, these functions may be absorbed by functionality provided by a network management system ("NMS") 212. In the preferred embodiment, however, these support functions may be provided by a centralized service administration ("SA") system 500 having a data management ("DM") component 400, as described herein with reference to fig. 4(a). As further shown in fig. 3, the ICP172 may be connected to other ICPs 172, other networks (not shown), or other devices (not shown) through a direct link 214 having signaling 216 and bearer links 218. Direct links avoid latency between the connected devices and allow the devices to communicate in their own language. The ICP172 is the "brain" of the IDNA node 204 and is preferably a general-purpose computer, which may range from a single processor with a single memory storage device to a large-scale computer network, depending on the processing requirements of the IDNA node 204. Preferably, the general-purpose computer has redundant processing, memory storage, and connections.
As used herein, a general-purpose computer refers to a computer built from, or based upon, commercial off-the-shelf components, as opposed to a dedicated device specifically configured and designed for telephone switching applications. The integration of general-purpose computers within the calling network affords numerous advantages.
The use of a general-purpose computer gives the ICP172 the ability to meet increased processing demands through additional hardware. These additions include the ability to increase processing power, data storage, and communications bandwidth. They do not require the modification of manufacturer-specific software and/or hardware on each switch in the calling network. Consequently, new services and protocols may be implemented and installed on a global scale without modification of individual devices in the switching network. By changing the monolithic switch 20 (fig. 1) to the intelligent call processor 172, the present invention provides the foregoing advantages and increased capabilities.
In applications requiring more processing power, multiprocessing allows the use of less expensive processors to optimize the price/performance ratio for call processing. In other applications, it may be advantageous to use fewer but more powerful machines, such as minicomputers, having higher processing rates.
As described above, the ICP172 may comprise a cluster of general-purpose computers operating, for example, under the UNIX or Windows NT operating system. For example, in a large application supporting up to 100,000 ports on a single resource aggregation device, the ICP172 may consist of sixteen 32-bit processors operating at 333 MHz arranged as a symmetric multiprocessor cluster. The processors could, for example, be divided into four separate servers with four processors each. The individual processors would be connected by a system area network ("SAN") or other clustering technology. The processor cluster could share access to a redundant array of independent disks ("RAID") modular data storage device. Shared storage may be adjusted by adding or removing modular disk storage devices. The servers in the cluster preferably share redundant links to the RC180 (fig. 1).
As described above, and analogous to the "plug and play" feature of personal computers, the ICP software architecture is an open processing model that allows the interchangeability of: (1) management software; (2) ICP applications; (3) computing hardware and software; (4) resource aggregation device components; and even (5) service architecture and processing. Such a standardized architecture reduces maintenance costs and provides the benefits derived from economies of scale.
In this manner, the present invention enables the partitioning of development work and the use of modular tools, which results in faster development and implementation of services. Moreover, the use of service management and its related aspects remains within the control of the network operator on an as-needed basis, as opposed to the constraints imposed by fixed messaging protocols or a particular combination of hardware and software supplied by a given manufacturer.
Through the use of managed objects, the present invention also allows services and functions to be distributed within the network flexibly ("where you want it") and dynamically ("when you want it") according to any number of factors, such as capacity and usage. Performance is improved because service processing 22 (fig. 1), call processing 24 (fig. 1), and device processing 26 (fig. 1) operate within a single, homogeneous platform. In addition, the present invention allows the monitoring and manipulation of call sub-elements that were previously inaccessible. The present invention also provides for monitoring the usage of functions or services so that they can be eliminated when outdated or unused.
Resource aggregation device ("RC") 180 (fig. 1) is a collection of physical devices or resources that provide bearer, signaling, and connectivity services. The RC180, which may include intelligent peripherals 88, replaces the switch fabrics 28 and 158 (FIG. 1) of the IN or AIN or SSI/SF fabric. Unlike IN or AIN architectures, the control of resource aggregation devices such as RCA206 is at a lower level. Further, the RCAs 206 may include more than one switch fabric 158. A switch fabric 158 or other user interface (not shown) connects the plurality of users and the switching network via standard telephone connections. These subscriber systems may include ISDN terminal 52, fax machine 220, telephone 54, and PBX system 222. The ICP172 controls and communicates with the RC180 (FIG. 1), RCA206, and RCB208 via a high speed data communication line (minimum 100 Mb/sec Ethernet connection) 224. RC180, 206, and 208 may be modeled as a printer, and ICP172 may be modeled as a personal computer, where the personal computer uses a driver to control the printer. The "driver" in IDNA node 204 is a resource aggregation device proxy ("RCP") (not shown), which is described below with reference to fig. 5. This allows manufacturers to use this interface to provide an IDNA compliant node without having to rewrite all of their software to incorporate the IDNA model.
IN addition, control of the resource aggregation device 180 (FIG. 1), RCA206, and RCB208 is at a lower level than that typically provided by the AIN or IN architecture. As a result, resource aggregation device manufacturers need only provide a single interface to support device and network management processes; they do not have to provide call and service processing specific to the network owner. The low-level interface is abstracted into more discrete operations. Having a single interface allows the network owner to choose from a large number of resource aggregation device manufacturers based on price and performance. Intelligence is added to ICP172 rather than RC180, which insulates RC180 from change and reduces its complexity. Because the role of RC180 is simplified, changes are more easily implemented, easing migration to alternative switching and transmission technologies, such as asynchronous transfer mode ("ATM").
Intelligent peripheral ("IP") 88 provides the ability to process and perform the information actions contained within the actual call transmission path. The IP88 is typically within a single resource aggregation device, such as the RCB208, and is controlled by the ICP172 in a similar manner as the RCA 206. IP can provide the ability to process data within the actual transmission path in real time using digital signal processing ("DSP") techniques.
A network management system ("NMS") 212 is used to monitor and control the hardware and services in IDNA network 200. A proposed NMS212 implementation may be a telecommunications management network ("TMN")-compliant framework that provides management of the components within IDNA network 200. More specifically, NMS212 controls the provisioning of services, maintains the health of those services, provides information about those services, and provides network-level management functions for IDNA network 200. NMS212 accesses and controls services and hardware through agent functions within IDNA node 204. An ICP-NMS agent (not shown) within IDNA node 204 carries out commands and requests issued by NMS 212. The NMS212 may also directly monitor and control the RCAs 206 and RCBs 208 over a standard operations link 226.
As further shown in FIG. 3, a managed object creation environment ("MOCE") 228 includes the subcomponents that produce the services that run within the IDNA network. A graphical user interface ("GUI"), the basic subcomponent of the MOCE, embeds the service-independent building blocks and API representations used by service designers to create new services. The MOCE is a unified collection of tools hosted on a single user environment or platform, alternatively referred to as a service creation ("SC") environment. It represents the collection of operations required throughout the service creation process, such as service documentation, managed object definition, interface definition, protocol definition, definition of the data entries contained in managed objects, and service testing. The network owner need only develop a service once using MOCE228, because the managed objects can be applied to all nodes on its network. This contrasts with the network owner having each of its different switch manufacturers develop its own version of the service, which would mean the service must be developed multiple times.
MOCE228 and NMS212 are connected together through a registry 230. The registry 230 contains managed objects that are distributed by the NMS212 and used in the IDNA/NGIN node 204. The registry 230 also provides a buffer between the MOCE228 and the NMS 212. However, the MOCE228 may be directly connected to the NMS212 to perform "live" network testing, indicated by dashed line 232.
In accordance with a preferred embodiment of the present invention, shown in fig. 4(a), the IDNA/NGIN system includes a centralized service management ("SA") component 500 that provides, with added capabilities, both the repository (registry) 230 and network management (NMS) 212 functions of the IDNA system 170. In general, the SA component 500 shown in FIG. 4(a) supports the offline storage, naming, distribution, activation, and removal of all services and data for the IDNA/NGIN system, and additionally provides a data management ("DM") function enabling the runtime storage, replication, synchronization, and availability of data used by the service objects in the IDNA/NGIN service nodes.
More particularly, as conceptually represented in FIG. 4(b), the service management component 500 performs all functions required to manage, store, and distribute all services and service data used by the IDNA service processing nodes, and to configure both the hardware and software of the IDNA/NGIN system. In general, as shown in FIG. 4(b), the SA component 500 is responsible for: receiving data from the MOCE (service creation) 228, user order entry, and other legacy systems 229, so that the data may be provisioned for use in the IDNA/NGIN system; providing distribution data, service-independent building blocks ("SIBBs"), service logic programs ("SLPs"), and other service logic components 503, e.g., to the MOCE 228 as requested by MOCE/SCE users during service creation; receiving completed and tested service components, SIBBs, SLPs, or other service logic or data components 506 from MOCE 228; providing a unique name to each service component; and distributing the data and each service component 509 to the data management functional component 600, which is described in greater detail herein. In addition, as shown in fig. 4(a), the service manager 300 maintains the registry 230, which includes a comprehensive database of record ("DBOR") containing records of all IDNA services and data, from which the data management component 600 receives all of its data.
Other responsibilities of service management include: activating data and service components 512 to ensure that all data, SIBBs, and managed objects or service logic programs (SLPs) are available to the nodes through the data management component 600; registering the names of the data, SLPs, and SIBBs 515 by supplying their logical names to the network operating system ("NOS") component 700, described in detail below, for registration therewith; deactivating data and service components 518; and removing data and service components from the IDNA/NGIN system through the data management component 600. In addition to naming each SIBB and service, service management performs configuration management functions by maintaining the state of each SIBB and service (pre-tested, post-tested, deployed, etc.). This ensures that a service cannot be deployed until all of its components have been successfully tested and configured.
As further shown in fig. 4(b), the service management component 500 additionally performs the function of configuring and provisioning the IDNA/NGIN service nodes 204 in accordance with configuration information received by the SA. In particular, based on the received configuration information, the SA component 500 determines the capabilities of each component at each service node 204, which services and data are to be distributed to which nodes, which services are to run on which server(s) resident at a service node, and which data are to be cached in local memory resident in association with the IDNA/NGIN node server(s). In particular, the SA deploys the configuration rules contained in service profile (configuration) files 580 to a local (node) resource manager ("LRM") component 575 of the NOS system 700 for storage in the local LRM cache located at each service node. These configuration files 580 determine which services are executed on a given IDNA node. The LRM first reads the service profile stored in the local cache at that node and then determines, according to the rules in the service profile, a specific service layer execution environment ("SLEE"), e.g., a virtual machine, to run a service, and whether a service is to run actively in the SLEE (as a persistent object) or only as needed.
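The LRM decision described above can be sketched in Java. This is a minimal, hypothetical illustration only: the profile format, the `Mode` values, and the service names are assumptions, not the patent's actual file format.

```java
import java.util.Map;

// Hypothetical sketch of an LRM-style decision: a service profile maps each
// service name to an instantiation rule read from the node-local cache.
class LocalResourceManager {
    enum Mode { PERSISTENT, ON_DEMAND }

    private final Map<String, Mode> serviceProfile;

    LocalResourceManager(Map<String, Mode> serviceProfile) {
        this.serviceProfile = serviceProfile;
    }

    // Returns true if the named service should be kept running in the SLEE
    // as a persistent object rather than instantiated per request.
    boolean runsPersistently(String serviceName) {
        return serviceProfile.getOrDefault(serviceName, Mode.ON_DEMAND) == Mode.PERSISTENT;
    }

    public static void main(String[] args) {
        LocalResourceManager lrm = new LocalResourceManager(Map.of(
                "FD", Mode.PERSISTENT,           // e.g., a discriminator kept always active
                "SLP_1800_Collect", Mode.ON_DEMAND));
        System.out.println("FD persistent: " + lrm.runsPersistently("FD"));
    }
}
```

Keeping the rule table in a node-local cache, as the text describes, lets each node answer this question without a round trip to service management.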
Referring back to fig. 4(a), the NGIN data management component 600 functions in both a service life-cycle and a service utilization capacity. Whereas the service management component maintains the comprehensive database of record (repository), the data management component 600 provides the local data storage and data management functions for each IDNA/NGIN service node. This includes all types of data, including: service programs and SIBBs, data for services (user profiles, telephone numbers, etc.), multimedia files (such as audio files for interactive voice response ("IVR") services), and so on. Specifically, the data management component 600 of a service node receives an extract of the SA comprehensive DBOR comprising all data needed for the services performed by the local NGIN service node, as specified by service management. The mechanism for this is described in further detail below with respect to fig. 4(c).
In the preferred embodiment, the data management component 600 of the SA component provides local data storage and management functions for each IDNA/NGIN service node. In particular, data management stores data received from service management in one or more databases, and makes services/data readily available to the service control environment by caching the needed data in memory resident within the service control computers, or in a co-located database server, so that the services/data can be provided to a service control service with minimal latency. More generally, the data management component 600 performs the real-time storage, replication, synchronization, and availability of data, whether received from service management or as a result of service processing. These DM functions may be further categorized as: 1) a data repository function; 2) a data manipulation function; 3) a data utilization function; and 4) a billing record generation function.
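The cache-then-fallback behavior described in this paragraph can be sketched as follows. This is a simplified, hypothetical model: the key/value shape of the data and the stand-in for the co-located database server are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the data management caching behavior: service
// control asks for data by key; DM serves it from in-memory cache when
// possible and falls back to the node's database server on a miss.
class DataManagement {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> databaseServer;  // stand-in for the co-located DB

    DataManagement(Function<String, String> databaseServer) {
        this.databaseServer = databaseServer;
    }

    String fetch(String key) {
        // Minimal-latency path once cached; loads from the DB on first access.
        return cache.computeIfAbsent(key, databaseServer);
    }

    public static void main(String[] args) {
        DataManagement dm = new DataManagement(k -> "profile-for-" + k);
        System.out.println(dm.fetch("user-555-0100"));  // miss: loaded, then cached
        System.out.println(dm.fetch("user-555-0100"));  // hit: served from memory
    }
}
```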
Referring now to fig. 5, a logical and functional diagram of a communication system using an intelligent distributed network architecture 200 in accordance with the present invention is illustrated. The illustrated ICP172 includes an ICP-NMS agent 240 and a SLEE242, which in turn hosts various managed objects 246, 248, 250 and 252 derived from a managed object base class 244.
Generally, managed objects are a method of packaging software functions, wherein each managed object offers both a functional interface and a management interface to implement the functions of the managed object. The management interface controls access to who and what can invoke the managed object's functions. In the present invention, all of the telephony application software run by IDNA/NGIN node 204, except the infrastructure software, is implemented as managed objects and supporting libraries. This provides a uniform interface and implementation for controlling and managing the IDNA node software.
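The dual-interface packaging described above can be sketched as a small Java class hierarchy. The class and method names here are hypothetical illustrations, not the patent's actual object model.

```java
// Demo entry point: activate a managed object, then use its function.
class ManagedObjectDemo {
    public static void main(String[] args) {
        ManagedObject mo = new CallBlock();
        mo.activate();
        System.out.println(mo.invoke("call-from-555-0100"));
    }
}

// Each managed object packages a function behind both a functional interface
// and a management interface, per the uniform model described above.
abstract class ManagedObject {
    private boolean inService = false;

    // --- management interface: controls who/what may use the function ---
    void activate()   { inService = true; }
    void deactivate() { inService = false; }
    boolean isInService() { return inService; }

    // --- functional interface: the packaged service behavior ---
    final String invoke(String request) {
        if (!inService) throw new IllegalStateException("object not in service");
        return perform(request);
    }

    protected abstract String perform(String request);
}

// A hypothetical concrete managed object (call blocking).
class CallBlock extends ManagedObject {
    @Override protected String perform(String request) {
        return "blocked:" + request;
    }
}
```

The point of the split is that management software (e.g., the NMS) manipulates only the management interface, while service processing uses only the functional interface.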
The collection of network elements that connect, route, and terminate bearer traffic handled by a node will be collectively referred to as the resource aggregation device 180. Service processing applications running on the SLEE use the resource aggregation device proxy ("RCP") 244 as a control interface to RC 180. RCP244 may be likened to a device driver in that it adapts device-independent commands from objects in the SLEE into device-specific commands to be executed by RC 180. RCP244 may be described as an interface implementing the basic commands common among vendors of the resources in RC 180. RCP244 may be implemented as one or more managed objects running on IDNA node 204, as shown. Alternatively, this function may be provided as part of RC 180. NMS212, registry 230, and MOCE228 are consistent with the description of those elements in the discussion of figs. 3-5(a).
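The driver analogy above can be sketched in Java: a device-independent operation on one side, a vendor-specific command on the other. The interface, class, and command-string syntax are hypothetical; a real RCP would cover the full command set of the resource aggregation device.

```java
// Driver-style proxy demo: a device-independent command from a SLEE object is
// adapted into a vendor-specific command for one (hypothetical) vendor switch.
class ResourceComplexProxy {
    private final ResourceComplex device;

    ResourceComplexProxy(ResourceComplex device) { this.device = device; }

    // Device-independent operation invoked by service objects in the SLEE.
    void connect(String fromPort, String toPort) {
        // Adaptation step: map the generic command onto this vendor's syntax.
        device.send("VENDOR_A:CONNECT " + fromPort + " " + toPort);
    }

    public static void main(String[] args) {
        VendorASwitch sw = new VendorASwitch();
        new ResourceComplexProxy(sw).connect("T1", "T9");
        System.out.println(sw.last);
    }
}

// Vendor-facing side of the proxy, analogous to a printer's native commands.
interface ResourceComplex {
    void send(String vendorCommand);
}

class VendorASwitch implements ResourceComplex {
    String last;  // records the last vendor command, for demonstration
    public void send(String vendorCommand) { last = vendorCommand; }
}
```

Swapping in a different `ResourceComplex` implementation is all a manufacturer would need to supply, which is the point of the single-interface argument in the surrounding text.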
Note that operational link 226 directly connects NMS212 to RC 180. This corresponds to the more traditional role of a network management system in monitoring the operational status of the network hardware. This can be done independently of the IDNA architecture (e.g., by using the well-known TMN approach). In addition, RC180 may be connected to other resource aggregation devices 254. A direct signaling connection 214 is also shown into ICP172 so that signaling 216, such as SS7, may be brought into the call processing environment. By intercepting signaling at the periphery of the network, SS7 messages may travel directly to ICP172 without passing through RC 180. This reduces latency and improves robustness by shortening the path. An accompanying bearer line 218 is connected to RC 180.
Fig. 6 shows the layering of functional interfaces within the ICP 172. MOCE228 is the system that generates the managed object software and its dependencies. The NMS212 controls the execution of ICP172 via an interface to an agent function provided within ICP172, referred to as the ICP-NMS agent 240. The NMS212 controls the operation of a local operating system ("LOS") 260 on the ICP 172. The NMS212 controls the operation of ICP172, including starting and stopping processes, querying the contents of the process table and the status of processes, configuring operating system parameters, and monitoring the performance of the general-purpose computer system that hosts ICP 172.
The NMS212 also controls the operation of a wide area network operating system ("WANOS") 262. The NMS212 controls the initialization and configuration of the WANOS support processes and the WANOS class library through its control of LOS260 and through any other interface provided by the NMS-SLEE control. The NMS212 controls the instantiation and operation of one or more SLEEs 242 running on ICP 172. LOS260 is a commercially available operating system for operating the general-purpose computer system. WANOS262 is commercially available middleware (e.g., an object request broker) that facilitates seamless communication between computing nodes. SLEE242 hosts the execution of managed objects 244, which are the software instances that implement the service processing architecture. SLEE242 implements the means to control the execution of managed objects 244 via the ICP-NMS agent 240. Thus, a SLEE242 instance is a software process capable of deploying and removing managed object software, instantiating and destroying managed object instances, supporting the interaction and collaboration of managed objects, administering access to native libraries 264, and interfacing with the NMS-ICP agent 240 to implement the required controls.
Native libraries 264 are libraries coded to depend only on LOS260 or WANOS262 and the native general-purpose computer execution (e.g., compiled C libraries). They are used primarily to supplement the native functions provided by SLEE 242.
SLEE libraries 266 are libraries coded to execute within SLEE 242. They can access the functions provided by SLEE242 and the native libraries 264. Managed objects 244 are the software loaded and executed by SLEE 242. They can access the functions provided by SLEE242 and the SLEE libraries 266 (and possibly the native libraries 264).
The ICP-NMS agent 240 provides the NMS212 with the ability to control the operation of ICP 172. The ICP-NMS agent 240 implements the ability to control the operation and configuration of LOS260, the operation and configuration of WANOS262, and the instantiation and operation of SLEE(s) 242. The proposed service processing architecture operates in layers of increasing abstraction. From the perspective of SLEE242, however, there are only two layers: the managed object layer, which is the layer of objects (software instances) interacting under the control of NMS 212; and the library layer 264 or 266, which is the layer of software (native to SLEE242 or LOS260) that provides supplementary functions to the operation of the managed objects 244 or SLEE242 itself. It is, however, contemplated that at some point NMS212 may relinquish precise control over the location of managed object instances. For example, managed object instances may be allowed to migrate from one node to another according to one or more algorithms or events, such as in response to a command.
It should be understood that, collectively, the LOS and WANOS functions may be represented as a network operating system, or "NOS", which functions to provide platform-independent and location-independent connectivity between the IDNA/NGIN system components, as shown in FIG. 6. That is, the NOS comprises a set of network-wide services that provide process interfaces and communications among the other IDNA/NGIN functional components and subcomponents. Among the services provided by the NOS are object connectivity, logical name translation, inter-process communication, and local and system-wide resource management ("RM"). For example, as shown in fig. 4(a), the NOS component 700 provides local (NODE RM) and system-wide (SYS RM) resource management functions. In particular, the NOS component encapsulates the location of any service from the processes that need the service or its data, so that a process need only invoke a single logical name. The NOS component then determines which service instance to use and provides connectivity to that instance. The NOS700 enables the widely distributed nature of the IDNA/NGIN and the platform independence of the IDNA/NGIN. For example, the logic programs described herein use the NOS component 700 to call other logic programs, and can therefore call logic programs running in different SLEEs, whether on the same service node or on a remote service node. In particular, via the SA500, a service node may be specified to perform only certain services. When a call arrives at a switch whose associated service node 204 cannot perform the desired service, e.g., joining a conference bridge, the IDNA may need to direct the call to another node configured to provide that service. Preferably, the IDNA, through the NOS component 700, invokes the needed service at the remote service node, performs the call processing, and provides the service response to the switch at the original node.
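The logical-name translation described above can be sketched as a small registry. The round-robin instance selection, the node/SLEE address strings, and the service name are all assumptions for illustration; the text only requires that the caller use one logical name and that NOS choose an instance, local or remote.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of NOS name translation: callers use one logical name;
// NOS picks a registered instance (here, simple round-robin across nodes).
class NetworkOS {
    private final Map<String, List<String>> registry = new HashMap<>();
    private final Map<String, Integer> next = new HashMap<>();

    void register(String logicalName, String instanceAddress) {
        registry.computeIfAbsent(logicalName, k -> new ArrayList<>()).add(instanceAddress);
    }

    // Resolve a logical name to a concrete instance, local or remote.
    String resolve(String logicalName) {
        List<String> instances = registry.get(logicalName);
        int i = next.merge(logicalName, 1, Integer::sum) - 1;
        return instances.get(i % instances.size());
    }

    public static void main(String[] args) {
        NetworkOS nos = new NetworkOS();
        nos.register("SLP_ConferenceBridge", "node-204/slee-1");
        nos.register("SLP_ConferenceBridge", "node-205/slee-3");  // remote node
        System.out.println(nos.resolve("SLP_ConferenceBridge"));
        System.out.println(nos.resolve("SLP_ConferenceBridge"));
    }
}
```

Because callers never see the instance address, an instance can be moved to another node without changing any calling logic program, which is the platform- and location-independence claimed above.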
Referring now to FIG. 7, the class structure of managed objects in accordance with the present invention is illustrated. The abstract base class managed object 244 includes common functionality and virtual functions to ensure that all derived classes can be properly supported as objects in SLEE 242. Specifically, four distinct subclasses are shown: a service control class 252, a call control class 250, a bearer control class 248, and a resource proxy class 246.
The service control class 252 is the base class for all service function objects. The dialog manager class 280 contains the information and activities related to a dialog. A dialog may include one or more calls or other invocations of network functions. The dialog manager class 280 provides a unique identifier for each dialog. Where call processing occurs in a nodal fashion, billing information must be collated. A unique identifier for each call makes collation easy, instead of requiring costly correlation processing. In service processing, protocols are peeled away by successive layers of abstraction. Eventually, a protocol is abstracted sufficiently to warrant the allocation/instantiation of a dialog manager (e.g., in SS7, the receipt of an IAM message would warrant the existence of dialog management).
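The unique-identifier point above can be sketched minimally. The ID format (node name plus a monotonic counter) is an assumption chosen to make IDs unique across distributed nodes without any correlation step; the patent does not specify a format.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: the dialog manager hands out one unique identifier per
// dialog so billing records produced at different points can be collated by
// ID instead of by costly correlation processing.
class DialogManager {
    private static final AtomicLong counter = new AtomicLong();
    private final String dialogId;

    DialogManager(String nodeName) {
        // Node name plus a monotonic counter keeps IDs unique across nodes.
        this.dialogId = nodeName + "-" + counter.incrementAndGet();
    }

    String id() { return dialogId; }

    public static void main(String[] args) {
        DialogManager d1 = new DialogManager("node-204");
        DialogManager d2 = new DialogManager("node-204");
        System.out.println(d1.id() + " / " + d2.id());
    }
}
```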
The bearer capability class 282 changes the quality of service on the bearer. The service control class 252 can change the quality of service ("QoS") of a call or even change the bearer capability, such as moving from 56 kbits/sec to a higher rate and then back. The QoS is managed by a connection manager class 302. For example, the half-rate sub-class 284 downgrades the QoS of a call to a 4 KHz sampling rate, rather than the usual 8KHz sampling rate. Stereo subclass 286 may allow users to form two connections to support a left channel and a right channel in a call.
The service arbitration class 288 codifies the mediation of service conflicts and service interactions. This is required because the service control classes 252 can conflict, particularly during service origination and termination. For many practical reasons, it is undesirable to encode within each service control class 252 the knowledge of how to resolve conflicts with every other type of service control class 252. Instead, when a conflict occurs, references to the conflicting services and their pending requests are passed to the service arbitration class 288. The service arbitration class 288 may then decide the appropriate course of action, perhaps taking into account the local context, configuration data, and subsequent queries to the conflicting service objects. Having a service arbitration class 288 allows conflict-resolution algorithms to be documented and explicitly coded, as opposed to hard-coded or implicit mechanisms. Furthermore, when a service is updated or added, existing services need not be updated to account for any conflict changes, which would otherwise require changes to multiple relationships within a single service.
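The centralized-arbiter idea can be sketched as follows. The priority-based policy, the record type, and the service names are assumptions; the point is only that the policy lives in one explicit, documentable place rather than inside every service.

```java
import java.util.List;

// Hypothetical sketch: conflicting service requests are handed to a single
// arbiter whose policy is explicit, instead of each service hard-coding how
// to resolve every other service's conflicts.
class ServiceArbitration {
    // An explicit, documentable policy: lower number wins (e.g., emergency first).
    record PendingRequest(String serviceName, int priority) {}

    PendingRequest arbitrate(List<PendingRequest> conflicting) {
        PendingRequest winner = conflicting.get(0);
        for (PendingRequest r : conflicting)
            if (r.priority() < winner.priority()) winner = r;
        return winner;
    }

    public static void main(String[] args) {
        ServiceArbitration arbiter = new ServiceArbitration();
        var winner = arbiter.arbitrate(List.of(
                new PendingRequest("CallWaiting", 5),
                new PendingRequest("EmergencyOverride", 1)));
        System.out.println(winner.serviceName());
    }
}
```

Adding a new service then means registering it with the arbiter's policy, not editing every existing service, which is the maintenance benefit argued above.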
The feature class 290 implements a standard set of capabilities associated with telephony (e.g., three-way calling, call waiting). One such capability may be call override 292, which enables an originator to disconnect an existing connection in order to reach an intended recipient. Another common capability may include call block 294, whereby calls may be rejected based on a set of criteria about the originator.
The selective invocation of other services during call processing is provided by the service classifier class 296, which is itself classified as a service. The service classifier class 296 provides flexible, context-sensitive service activation and avoids the need for fixed code within each service object to decide when to activate a service. The activation sequence is decoupled from the service itself. For example, suppose users A and B have access to the same set of features. User A chooses to selectively invoke one or more of his services using a particular set of signals. User B prefers a different set of signals to activate his services. The only difference between these users is the manner in which they activate their services. It is then desirable to separate the selection process from the services themselves. There are two available solutions: the service selection processes for users A and B may be encoded in separate service classifier classes 296, or one service classifier class 296 may use a per-user profile to indicate the appropriate information. This can be generalized to more users whose service sets are disjoint. In addition, the use of the service classifier class 296 can alter the mapping of service access based on the context or the progress of a given call. This type of implementation allows various call participants to activate different services using potentially different activation inputs. In the prior art, switch vendors deliver inflexible service selection schemes that prevent this capability.
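The per-user profile solution described above can be sketched directly. The profile shape (user to signal-to-service map) and the dial codes are illustrative assumptions.

```java
import java.util.Map;

// Hypothetical sketch of the per-user profile idea: one classifier class,
// with each user's profile mapping that user's chosen activation signals to
// services, so the selection process lives outside the services themselves.
class ServiceClassifier {
    private final Map<String, Map<String, String>> profiles;  // user -> (signal -> service)

    ServiceClassifier(Map<String, Map<String, String>> profiles) {
        this.profiles = profiles;
    }

    // Returns the service to activate, or null if the signal is unmapped.
    String serviceFor(String user, String signal) {
        return profiles.getOrDefault(user, Map.of()).get(signal);
    }

    public static void main(String[] args) {
        ServiceClassifier sc = new ServiceClassifier(Map.of(
                "userA", Map.of("*72", "CallForwarding"),
                "userB", Map.of("#21#", "CallForwarding")));  // same service, different signals
        System.out.println(sc.serviceFor("userA", "*72"));
        System.out.println(sc.serviceFor("userB", "#21#"));
    }
}
```

Changing how user B activates a service is then a profile edit, not a change to the service object.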
The media-independent service class 298 is a type of service control class 252, such as store-and-forward 300, broadcast, redirection, preemption, QoS, and multi-party connection, that can be applied to different media types, including voice, fax, e-mail, and others. If a service control class 252 is developed such that it applies to each media type, then the service control classes 252 can be divided into reusable service control classes 252. Ideally, the service control class 252 is divided into media-dependent and media-independent functions (i.e., a media-independent SC that implements the service, and a set of media-dependent adapter SCs, one for each media type). As derived from the media-independent class 298, the store-and-forward class 300 provides the generic ability to store a message or data stream of some media type and then deliver it later based on some event. Redirection provides the ability to move a connection from one logical address to another based on specified conditions. This concept is the basis for call forwarding (all types), ACD/UCD, WATS (1-800 services), find-me/follow-me, mobile roaming, and the like. Preemption, whether negotiated or otherwise, includes services such as call waiting, priority preemption, and the like. QoS-modulated connections implement additional services such as voice/fax over packet networks, streaming video, and file transfer. Multi-party connections include three-way conferencing, N-way video conferencing, and the like. While user control and input is currently achieved primarily using keys on a telephone, voice recognition may be used for user control and input in the future.
The connection manager class 302 is responsible for coordinating and arbitrating the connections of the various bearer controls 248 involved in a call. In this way, the complexity of managing the connectivity between parties in multiple calls is encapsulated and removed from all other services. Service and call processing are decoupled from the connections. This breaks the paradigm of mapping calls to connections as one-to-many. The mapping of calls to connections is now many-to-many.
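The many-to-many mapping can be sketched as a simple bookkeeping structure. The identifiers and the bridge example are hypothetical; the sketch only illustrates that one call may span many connections and one connection may serve many calls.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the many-to-many mapping: the connection manager
// tracks which connections belong to which calls, so one call may span many
// connections and one connection may serve many calls (e.g., a bridge).
class ConnectionManager {
    private final Map<String, Set<String>> callToConnections = new HashMap<>();

    void addSegment(String callId, String connectionId) {
        callToConnections.computeIfAbsent(callId, k -> new HashSet<>()).add(connectionId);
    }

    Set<String> connectionsOf(String callId) {
        return callToConnections.getOrDefault(callId, Set.of());
    }

    public static void main(String[] args) {
        ConnectionManager cm = new ConnectionManager();
        cm.addSegment("call-1", "conn-A");
        cm.addSegment("call-1", "conn-B");   // one call, two connections
        cm.addSegment("call-2", "conn-B");   // conn-B shared, as on a bridge
        System.out.println(cm.connectionsOf("call-1").size());
    }
}
```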
The connection manager classes 302 within an architecture are designed to operate stand-alone or to cooperate as peers. In operation, the service control classes 252 present requests to the connection manager class 302 to add, modify, and remove call segments. It is the responsibility of the connection manager class 302 to effect these changes. Note that since connections can themselves be considered resources, the connection manager class 302 may be implemented as a proxy for, or an aspect of, basic resource management functions.
The call control class 250 implements essential call processing, such as the basic finite state machine commonly used in telephony, and specifies how call processing is to occur. Two classes may be derived along the functional division of originating (placing a call) 304 and terminating (receiving a call) 306.
The bearer control class 248 converts, via the resource proxy 246, the specific signals and events to and from the resource aggregation device 180 into the generic signals and events that can be understood by the call control objects 250. One anticipated role of objects derived from this class is to gather information about the origination of a call, such as the subscriber line number, class of service, type of access, etc. The subclasses may be differentiated on the basis of the circuit or channel associated with the signaling. These may include the channel-associated class 308, exemplified by an ISDN primary rate interface 310 in which a single signaling channel serves every 23 bearer paths; the channel-single class 312, represented by an analog telephone 314 that uses dialed digits to control a single circuit; and the channel-common class 316, represented by SS7 signaling 318, which is completely dissociated from the bearer channels.
The resource proxy class 246 is devoted to interfacing the execution environment to real-world switches and other elements in the bearer network. Examples of internal states implemented at this level and inherited by all derived classes are in-service vs. out-of-service and free vs. in-use. Contemplated derived classes are telephone 320 (a standard proxy for a standard 2500 telephone set), voice response unit ("VRU") 322 (a standard proxy for voice response units), IMT trunk connection 324 (a standard proxy for digital trunk (T1/E1) circuits), and modem connection 326 (a standard proxy for digital modems), corresponding to specific resource types within the resource aggregation device 180. A preferred manner in which the service control component may serve incoming service requests will now be described with reference to fig. 10, which particularly shows another embodiment of a service control environment 430 having SLEE applications 450, 450' executing within the operating system of a service control server, e.g., a general-purpose computer 440.
As shown in fig. 8, SLEE450 is a Java "virtual machine" designed to execute at least five types of logic programs (objects) that implement call processing services and other support services: 1) feature discriminator logic programs ("FDs") 510, which are functional subcomponents of the service control class/service classifier class 296 (fig. 7), that first receive a service request from the switching platform, determine which service to perform on a call based on certain available criteria, e.g., the dialed number of the call, and then invoke the appropriate service logic to process the call; 2) service logic program ("SLP") objects 520, which are functional subcomponents of the service control class 252 (fig. 7) that perform the service processing for a received service request or event; 3) line logic program ("LLP") objects 530, which are functional subcomponents of the call control class 250 (FIG. 7) that maintain the current state of a network access line; 4) event logic program ("ELP") objects 540, which are functional subcomponents of the service control/dialog manager class 280 (fig. 7), to which all other logic programs write events; and 5) call logic program ("CLP") objects 545, which are functional subcomponents of the service control/connection manager class 302 (FIG. 7) that maintain the state of an entire call by providing a connection point for all the other logic programs involved in processing a call. Each of these logic programs is embodied as a software object, preferably written in the Java programming language, that may be instantiated transiently or persistently, as described later. The IDNA/NGIN service control architecture is designed such that these objects are written only once in the MOCE/SCE, yet may be deployed to a SLEE on any type of computer and on any type of operating system anywhere in the network.
In particular, the FD510 is a static subcomponent that: 1) first receives a service request from the resource aggregation device, e.g., a switch, when the switch recognizes that the service is to be handled by the IDNA/NGIN; 2) analyzes the information associated with the service request; and 3) determines which SLP can handle the service request. Preferably, the FD may be a system task or an instantiated object for receiving data provided from the resource aggregation device, including, but not limited to, the called number, calling number, originating switch ID, originating trunk group, originating line information, and network call ID. Through the NOS, FD510 initiates the instantiation of the appropriate SLP, the CLP, and the originating LLP to process the call. Preferably, FD510 is a persistent object, not tied to a particular call or event, and runs actively within the service control SLEE450 at all times. Depending on the complexity of the analysis performed and the load of requests to the FD, one or more instances of the FD may run actively within the service control SLEE450 in order to share load and guarantee real-time efficiency. For example, one FD may be used to analyze received SS7 message data, while another FD may be used to analyze ATM message data.
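The discrimination step can be sketched as a routing decision over the request data. The prefix table, the SLP names, and the single-criterion lookup are illustrative assumptions; the text lists several more criteria (calling number, trunk group, line information, call ID).

```java
import java.util.Map;

// Hypothetical sketch of feature discrimination: the FD inspects data supplied
// by the resource aggregation device (here, only the called number) and names
// the SLP that should handle the request; NOS would then instantiate that SLP.
class FeatureDiscriminator {
    // Illustrative routing table only; real criteria would also include the
    // calling number, originating switch ID, trunk group, and network call ID.
    private static final Map<String, String> BY_CALLED_PREFIX = Map.of(
            "1800", "SLP_TollFree",
            "1900", "SLP_PremiumRate");

    String discriminate(String calledNumber) {
        for (var e : BY_CALLED_PREFIX.entrySet())
            if (calledNumber.startsWith(e.getKey())) return e.getValue();
        return "SLP_Default";
    }

    public static void main(String[] args) {
        FeatureDiscriminator fd = new FeatureDiscriminator();
        System.out.println(fd.discriminate("18005551234"));
    }
}
```

Because the FD itself holds no per-call state, several instances of it can run in parallel to share load, as the paragraph above notes.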
The line logic program (LLP) 530 is a functional subcomponent that: 1) maintains the current state of a network access point, connection, or line; 2) queries data management for features associated with the physical point, connection, or line; and 3) applies those features, such as call drop, call waiting, call forwarding, and overflow routing, to the call as instructed. There is one LLP associated with the line on which a call originates, hereinafter referred to as the "LLPO", and one LLP associated with the point, connection, or line on which the call terminates, hereinafter referred to as the "LLPT". Once a line logic program instance is instantiated, it registers itself with the switch fabric. As will be explained below, the line logic program 530 sends all event data to the ELP subcomponent of the same instance of service processing.
Dynamic subcomponents are those that are dynamically constructed during different phases of service processing and are torn down when an instance of service processing completes. They include: the event logic program (ELP); the call logic program (CLP); and the service logic program (SLP).
The event logic program (ELP) 540 is a functional subcomponent for maintaining real-time event data generated during service processing and recording all event data that occurs during service execution. The event logic program is preferably instantiated by the call control process at the switch when an event is first received. When the switch sends a service request to the NGIN, it passes along the address of the ELP so that event data can be sent to this logic program, which is tied to the call. The event logic program is accessible to all of the functional subcomponents within the same instance of service processing, i.e., the CLP, LLPs, and SLP related to the call. As each service processing component processes the call during service execution, it writes event data to the ELP according to rules established in advance through the NOS. When a call is completed, the event data in the ELP is written to a data store or log file, from which it is then compiled into billing records and sent to downstream systems for billing, traffic/usage reporting, and other back-office functions. In particular, the ELP performs the functions of: 1) collecting the network events generated by a particular call; 2) formatting those events into call history records, such as call detail records ("CDRs"), billing data records ("BDRs"), switch event records, and the like; and 3) verifying, validating, and storing the information, for example in data management, for future transmission to downstream systems, for example for user billing. It should be understood that the rules determining which events are written to the ELP are established at service creation time. The event data is additionally accessible by fault management and network management systems.
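The record-then-compile behavior of the ELP may be sketched as follows; the record layout and method names are illustrative assumptions, not the actual CDR/BDR formats:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event log: logic programs write events during the call,
// and the collected events are compiled into a call history record at completion.
class EventLog {
    private final List<String> events = new ArrayList<>();

    // each logic program writes its events here during service processing
    void write(long timestamp, String source, String event) {
        events.add(timestamp + "|" + source + "|" + event);
    }

    // on call completion, flatten the collected events into a (simplified) record
    String toCallDetailRecord(String callId) {
        StringBuilder cdr = new StringBuilder("CDR:" + callId);
        for (String e : events) cdr.append('\n').append(e);
        return cdr.toString();
    }
}
```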
The call logic program (CLP) 545 is a functional subcomponent that maintains the state of each SLP involved in service processing and provides a processing interface among all logic programs (LPs). In one embodiment, a CLP instance is instantiated by the FD when an event service request is first received for a call, or it may be instantiated by a call control component located at the switch. Alternatively, the CLP 545 may be instantiated by an SLP 520 at some point during service processing, according to a trigger point programmed into the SLP; in this way, the instantiation of a CLP may be specific to a service. The call logic program, when instantiated, receives the addresses of all of the functional subcomponents within the same instance of service processing, i.e., the SLP, LLPs, and ELP. The CLP then associates the SLP, LLPO, LLPT, and ELP for the call, and is accessible by all of these subcomponents within the same instance of service processing. That is, the call logic program is the connection point for communication between the SLPs and the LLPs involved in the same instance of service processing. When a call is completed, the CLP notifies all of the subcomponents within the same instance of service processing that the call is completed, which initiates the tear-down process for these logic programs.
The service logic program (SLP) 520 is a dynamic subcomponent that provides the logic required to perform a service. An SLP is tied to a service, not to a call, and performs the service, and the features contained therein, for a call. Features that an SLP may apply for a service include, for example, call routing algorithms and IVR services. The SLP may be a persistent object, for frequently used services, or it may be instantiated when needed by the FD for infrequently used services and terminated when the call completes. Whether a particular SLP is active at all times, at certain times, or only on demand is specified in a configuration profile 580 generated by service management for that service, as shown in fig. 11. Preferably, the SLP has access to the CLP and ELP subcomponents within the same instance of service processing.
Not all SLPs are associated with a particular call service; some SLPs are available for tasks that are required by, and invoked from, other SLPs. Thus, for example, an SLP for an 800 service may need to invoke another SLP to query a line information database in order to perform its call routing task. One SLP may also pass call processing control of a call to another SLP. Preferably, only one controlling SLP executes at a time for a single instance of service processing. Any event data generated as part of the service tasks performed by an SLP is sent to the ELP component 540 within the same instance of service processing.
An SLP may not be able to execute directly on an operating system, because it does not contain all of the information needed to execute on a given operating system. Further, so that an SLP can execute on different operating systems without a change in format or content, NOS middleware is provided between the SLP and the operating system to keep the SLP consistent across operating systems.
As further shown in fig. 8, other processes that execute within SLEE 450 for support and operational functions include: a service manager ("SM") object 554, which is responsible for loading, activating, deactivating, and removing services that run in the SLEE, and which additionally monitors all other services running within its SLEE and reports status and usage data to NOS; a NOS client process 558, which is a NOS class library providing an interface to NOS services and used by all services running within the SLEE to invoke NOS services, i.e., it is the gateway to NOS; a thread manager ("TM") 557, which provides the functionality needed for NGIN services to execute concurrently without tying up all SLEE resources; and a data management API ("DM API") 410 for interfacing with the local cache 415 and cache manager components of DM 400, described herein with reference to fig. 4(c).
Other service instances loaded in the SLEE, as shown in fig. 8, include a service agent ("Sag") instance 559 and its associated thread manager instance 557, used for service activation at the service node, as described in more detail below.
Fig. 12(a) illustrates the (SLEE.java) processing steps that provide the main entry point into the SLEE process. As shown in fig. 12(a), step 602 assumes that a DM system component is available; that a NOS site locator system, operable to receive logical name and object reference registrations, is available, comprising a NOS client process 558 and a NOS master process 560 (fig. 11) that provide a NOS class library used for interfacing with NOS services and used by all services running within the SLEE to invoke NOS services; and that the service control server operating system, e.g., Windows NT, UNIX, PC, etc., can initiate SLEE processing, e.g., through a bootstrap call such as main() or fork(). It should be appreciated that the NOS master component 560 (fig. 8) interfaces directly with the computer's operating system, the NOS client processes 558, and other system components 571. Preferably, a NOS master process 560 is located on the network or at a local node; it interfaces with the NOS client object 558 on each SLEE and includes the library of NOS classes for providing NOS services. Next, at step 604, a service control configuration file is read and parsed to create a configuration object, which may include a hash table containing key-value pairs, as indicated at step 606. The SLEE accepts two parameters: a name and a configuration file. The name parameter is a unique NGIN name string that is used by the NOS locator service to identify this instance of the SLEE, i.e., the SLEE registers itself with the NGIN locator service under this name (step 612), and the configuration file is used by the locator service to find its site locator. For example, the hash table can be used to look up SLEE configuration properties. As NOS is implemented using CORBA, basic CORBA functionality is then initialized at step 608. Next, at step 610, a SLEE class loader instance is created, and a NOS locator proxy service instance is created within the SLEE, as indicated at step 612.
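The configuration-parsing and registration steps (steps 604 through 612) may be sketched as follows, with a plain map standing in for the NOS locator service; names such as `slee.name` are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of SLEE start-up: parse the configuration file into a
// hash table of key-value pairs, then register the SLEE's unique name with
// a (stubbed) locator service.
class SleeBoot {
    // parse "key=value" lines into a configuration hash table (steps 604-606)
    static Map<String, String> parseConfig(String[] lines) {
        Map<String, String> config = new HashMap<>();
        for (String line : lines) {
            int eq = line.indexOf('=');
            if (eq > 0) config.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
        }
        return config;
    }

    // register this SLEE instance with the locator under its unique name (step 612)
    static String register(Map<String, String> config, Map<String, String> locator) {
        String name = config.get("slee.name");
        locator.put(name, "objref://" + name);   // stand-in for a NOS object reference
        return name;
    }
}
```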
Next, as indicated at step 615, a service manager (SM) class is loaded, instantiated, and bound with the local NOS locator service object via the class loader. It should be understood that the local locator service propagates the service manager registration to the other locator services within the NGIN domain. As will be explained below with reference to fig. 12(b), once the service manager object is registered with the locator service, service management requests to load, activate, deactivate, and remove services can be processed to/from the SLEE. Finally, as indicated at step 618, a SLEE thread is executed that processes the event loop, which keeps the SLEE running and allows the SLEE to process NOS events as they arrive through a service manager ("SM") or service agent ("Sag") object, as described in detail below.
Fig. 12(b) represents the (ServiceManagerImpl.java) processing steps performed by the service manager object instance 554 (fig. 8), which is instantiated as discussed above with reference to step 615 of fig. 12(a). Preferably, the SM object implements an ORB interface for performing service management operations on behalf of NOS. This process represents the steps taken by the SM instance to load, activate, deactivate, run, and terminate services within the SLEE, e.g., through its (load), (run), (start), and (stop) methods. The parameters passed by NOS to the SM object instance include a logical reference to the desired service and a Boolean flag indicating whether NOS should register the service with the NGIN local resource manager (LRM) site locator or whether the service is responsible for registering itself with NOS. As indicated at step 620, a request to load a service is first received, and the service's logical name is then resolved by the proxy naming service at step 622. Then, at step 624, a determination is made as to whether the requested service, e.g., 1-800 collect ("18C"), has already been loaded, that is, whether an object instance embodying the requested service has been instantiated. If an object for the requested service has been instantiated, NOS returns an object reference for the service to locate the physical object instance at step 626, and processing proceeds to step 632. If a service object for the requested service, e.g., 18C, has not been instantiated, a class loader is instantiated at step 625, which performs recursive loading to load all of the classes on which the requested service depends, including other SLPs and SIBBs. Recursive loading is possible, for example, by referring to a local configuration file in the local cache. In particular, a flag is passed in that indicates whether the class loader should recursively load all of these dependent classes into the JVM.
When loading the classes for a service for the first time, it should be understood that a generic service agent class may be loaded if it is not already loaded. Then, after all of the classes are loaded at step 625, the Boolean registration flag is checked at step 628 to determine whether the service must register itself with the local NOS naming service (proxy). If the Boolean flag is set, e.g., to true, the service is responsible for registering with the NOS naming service, as indicated at step 630. Otherwise, processing continues to step 632, where a Sag class instance is created and an association is established between the service agent object instance (fig. 11) and the particular service, e.g., by passing the SLP object into the service agent instance. Then, at step 635, a new SLEE thread is spawned in the manner to be described, and the SLEE thread is invoked to run the service agent, i.e., the SLEE thread is associated with the service agent. Finally, the SM process exits and control returns to SLEE.java. The SM is additionally responsible for monitoring all other services running within its SLEE and for reporting status and usage data to NOS through the methods provided in the SM.
In addition to the SM processing, the (SLEEClassLoader.java) process is explained in detail with reference to fig. 12(c). In particular, the SLEEClassLoader class is a specialized class that extends the JVM's ClassLoader class. It extends the behavior of the system class loader by allowing classes to be loaded over the network. Thus, as a first step 686 of fig. 12(c), the class loader first checks its local cache associated with the instance of the SLEE to see if the class has already been loaded and defined. If the class has already been loaded, processing returns. If the class has not been loaded, a message is sent via NOS to check a local data store (DM) to determine whether the class is available for loading, at step 688. For example, the SLEEClassLoader may use JDBC database connectivity to retrieve classes from a relational database; it should be understood that classes may be retrieved from any relational database that supports the JDBC API. If the service class is not found in the local data store, the SLEEClassLoader checks the local file system at step 689. If the class is found in either the data store or the local file system, the class is fetched, as indicated at step 690. Then, at step 694, a defineClass method is called to make the class available to the JVM execution environment. In particular, the (defineClass) method steps recursively through each class specified for executing the service and transforms an array of bytes into an instance of class Class. The newInstance method of class Class can then be used to create an instance of the newly defined class. This functionality allows the SLEE to load and instantiate new services while remaining generic. Preferably, as indicated at step 695, the fetched class is stored in the local cache so that the next time the class is loaded there will be a cache hit.
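The look-up order of fig. 12(c), i.e., local cache, then data store, then file system, with a write-back to the cache, can be sketched with maps standing in for the three stores; the class is illustrative and does not extend the real ClassLoader machinery:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the SLEEClassLoader look-up order.
class SleeClassLoaderSketch {
    final Map<String, byte[]> cache = new HashMap<>();      // classes already loaded and defined
    final Map<String, byte[]> dataStore = new HashMap<>();  // stands in for the local DM data store
    final Map<String, byte[]> fileSystem = new HashMap<>(); // stands in for the local file system

    // check the cache first, then the data store, then the file system;
    // a class found in either backing store is written back to the cache
    // so that the next load is a cache hit (step 695)
    byte[] findClassBytes(String name) {
        byte[] bytes = cache.get(name);
        if (bytes != null) return bytes;
        bytes = dataStore.get(name);
        if (bytes == null) bytes = fileSystem.get(name);
        if (bytes != null) cache.put(name, bytes);
        return bytes;
    }
}
```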
In the preferred embodiment, each of these instantiated objects registers itself with a NOS locator service, e.g., LRM 577, according to a naming convention generally represented by the following string:
… site level SLEE number SLP name …
Here, the site level is information pertaining to the physical location of the NGIN service control server 440; the SLEE number identifies the particular SLEE in which the object instance is instantiated, e.g., SLEE #1; and the SLP name is the logical name of the service, e.g., feature discriminator #1. The string may also include a "version number". The registration name is propagated to the other locator sites in the NGIN domain; through this registration process and the NOS resource management functions (described below), the NOS components know which processes have been deployed, where they have been deployed, and where services are currently available.
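Assembling a registration name from these parts may be sketched as follows; the "." separator is an assumption for illustration, as the convention above does not specify one:

```java
// Hypothetical builder for the site-level/SLEE-number/SLP-name registration string.
class RegistrationName {
    // version may be null, since the "version number" component is optional
    static String of(String site, int sleeNumber, String slpName, String version) {
        StringBuilder s = new StringBuilder(site).append('.')
            .append("SLEE").append(sleeNumber).append('.').append(slpName);
        if (version != null) s.append('.').append(version);
        return s.toString();
    }
}
```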
The methods and constructors of objects created by the class loader may reference other classes. To determine the referenced classes, the Java virtual machine invokes the loadClass method of the class loader that originally created the class. If the Java virtual machine only needs to determine whether the class exists, and to know its superclass if it does exist, the "resolve" flag is set to false. However, if an instance of the class is being created, or any of its methods is being invoked, the class must also be resolved. In this case, the resolve flag is set to true and the resolveClass method is invoked. This mechanism ensures that classes/SIBBs/JavaBeans referenced by a service are also resolved by the SLEEClassLoader.
FIG. 12(d) shows the process flow of the service agent class when it is instantiated. As shown at step 639, the first step is to create an instance of a thread manager ("TM") object associated with the service agent, depicted as TM object instance 557 in fig. 11. As will be explained below, the thread manager object is based on a (ThreadManager) class that may be instantiated to behave like a thread factory, creating a new SLEE thread for each service request, or like a thread pool, which is desirable when running on machines with high thread-creation latency. Next, at step 640, the Sag associated with the service enters a process event loop via its (run) class method and is now ready to receive call events associated with the service.
Referring to fig. 12(e), there are shown details of the ServiceAgent class, which provides the gateway into the NGIN services through its (begin), (continue), and (end) class methods. Each service within the SLEE has an associated ServiceAgent object, based on a class responsible for managing service instances (call instances) and dispatching events to service instances. As shown in fig. 12(e), after a Sag object instance is created and executing via the service manager's (load) method, the (begin) method of the Sag is invoked each time a new call requesting the service is received. In particular, as indicated in fig. 12(e), at step 641, tid and orid identifier parameters and a message stream containing event information related to service processing for the call, e.g., as provided by an initial address message ("IAM") from the IDNA/NGIN switch, referred to herein as the next generation switch ("NGS"), are first passed into the Sag begin method. Then, at step 643, the message stream related to the service instance is decoded, e.g., by invoking a (decode) method. In addition, a call context object instance for managing call context data is created to receive the decoded message information. Within the begin method, a new thread is allocated for the call by invoking the allocate method of the ThreadManager instance, described herein with reference to fig. 12(f), or a thread is pulled from the thread pool if thread instances for the service have been created in advance, as indicated at step 645. Otherwise, if the Sag (continue) method is being invoked, an object reference corresponding to the thread already allocated for the call is returned.
More specifically, the thread manager object is based on the ThreadManager class, which preferably manages threads according to a session ID. Two methods are provided for allocating and releasing threads, (allocate) and (release), respectively. Both allocate and release expect a unique identifier as a key that can be used for thread identification. The unique identifier comprises a transaction ID ("Tid"), which is set by the NGS switch that receives the call, and an object reference ID ("Orid") identifying the call originator; together they identify a call instance. Fig. 12(f) shows operational details of the ThreadManager (allocate) method. As shown in fig. 12(f), at step 660, the Tid and Orid identifiers that uniquely identify the call transaction are passed into the process, and a unique key is generated based on these identifiers. Then, at step 662, a query is made as to whether the key exists in the thread table, e.g., by checking a hash table of key-value pairs. If the key is found, meaning that a service thread has already been allocated for the call, the thread manager returns the SleeThread instance (thread object) after consulting the hash table, at step 664. Otherwise, at step 663, a counter that tracks the number of instantiated service threads is incremented and, as a means of monitoring system load, a determination is made at step 665 as to whether the maximum number of thread instances for the service has been exceeded. If the maximum number of thread instances for the service has been exceeded, e.g., as determined by comparing the counter value against the maximum service instance value found in the service configuration file, then at step 667 a message is issued to NOS, enabling it to find another instance of the service, which may be available, for example, in another SLEE executing at the same site or instantiated at another service node location, and processing returns.
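The allocate/release logic keyed on Tid and Orid may be sketched as follows; returning null stands in for the hand-off to NOS at step 667 when the maximum is exceeded:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the ThreadManager (allocate)/(release) behavior.
class ThreadManagerSketch {
    private final Map<String, Thread> threads = new HashMap<>();
    private final int maxInstances;
    private int count = 0;

    ThreadManagerSketch(int maxInstances) { this.maxInstances = maxInstances; }

    // the Tid + Orid pair uniquely identifies the call instance
    static String key(String tid, String orid) { return tid + ":" + orid; }

    // return the existing thread for this call, or allocate a new one;
    // null means the configured maximum was exceeded, in which case the
    // caller would ask NOS to locate another instance of the service
    synchronized Thread allocate(String tid, String orid, Runnable work) {
        String k = key(tid, orid);
        Thread t = threads.get(k);
        if (t != null) return t;
        if (count >= maxInstances) return null;
        count++;
        t = new Thread(work);
        threads.put(k, t);
        return t;
    }

    synchronized void release(String tid, String orid) {
        if (threads.remove(key(tid, orid)) != null) count--;
    }
}
```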
Instantiation of the SleeThread also initializes its PriorityEventQueue, described in detail herein with reference to fig. 12(g). If the maximum number of thread instances for the service has not been exceeded, then at step 668 a determination is made as to whether a threshold number of thread instances for the service has been exceeded. If a threshold for thread instances for the service has been exceeded, then at step 669 a warning is issued to the NOS local resource management function stating that the service threshold has been reached. Finally, regardless of the outcome at step 668, at step 670 a new SleeThread instance is allocated for the requested service, a priority event queue is initialized for the requested service, the thread is started, and control returns to the Sag instance for the service.
Returning to the service agent (begin) method functionality shown in fig. 12(e), after the thread manager allocates a thread for the service instance, the object variables associated with the thread are initialized at step 646, and a new object instance of the requested service is created by invoking a (clone) method. Then, at step 648, the newly cloned SLP instance is set into the newly allocated thread. Then, at step 650, it is determined whether there is event information that needs to be associated with the call instance, e.g., all of the IAM information extracted from the incoming message stream. If there is event information associated with the newly cloned SLP instance, it is pushed onto the thread, as indicated at step 652. Whether or not there is event information to push onto the thread, the newly allocated thread for the SLP is started, and it waits for the asynchronous arrival of service-related event information, which is processed by the Sag (continue) method. As previously described, the SleeThread allocated for the call maintains a priority event queue to hold all event information received during processing related to the service. All events related to service processing have an associated priority, and the thread manages the processing of event information according to its priority, i.e., its position in the service event queue. Finally, at step 654, a thread event loop is started for the call instance.
It should be understood that the Sag (continue) method is essentially the same as the (begin) method shown in fig. 12(e), with the difference that the Sag (continue) method is directed to channeling real-time service-related events to a service processing thread that has already been instantiated for the call instance, as explained with reference to fig. 12(e). Thus, the service agent's continue method receives the event and the call instance's identifying parameters, looks up the service thread associated with the tid, orid parameters for the received event, and pushes the event into that thread's priority event queue. It should be understood that both the Sag and SM classes comprise an IDL interface to NOS. A service (SLP) does not have such an interface, but is able to communicate system-wide through its Sag interface. During real-time service processing, SLEE 450 is able to: 1) interpret instructions at the SLP and SIBB levels during service processing; 2) deliver events to the designated SLP instance; 3) generate trace data if a trace flag is set; 4) allow tracing to be turned on at the SLP, SIBB, and SLEE levels and send trace data to a designated output; 5) generate SLEE usage data and send run-time usage data to a designated output; 6) generate exception data (errors) for a Telecommunications Management Network (TMN) interface; 7) generate performance data for the TMN interface; 8) receive a message/request to add a new SLP instance or utility, and add such a new SLP or utility instance without interrupting or degrading service processing; and 9) support the same service with additional service control instances for load sharing.
When a service instance has completed a transaction, it either initiates termination of the service or, as dictated by the system, initiates another transaction within the communication. In either event, the Sag (end) method is invoked, which functions to terminate the thread instance associated with the call. This is accomplished by invoking the ThreadManager (release) method, passing in the Tid and Orid that uniquely identify the call instance, pushing any remaining events onto the thread's event queue, and releasing the call, i.e., terminating the thread instance and/or returning the thread instance to the thread pool.
Preferably, the SleeThread class instance provides the functionality needed for IDNA/NGIN services to execute concurrently without tying up all SLEE resources, and it facilitates cooperative resource sharing. Specifically, there is a one-to-one mapping between SleeThreads and service instances, and the SLEE associates one instance of a SleeThread with one instance of a service, i.e., there is one SleeThread instance associated with each call handled by a service. The SleeThread also holds the transaction ID ("Tid"), the object reference ID ("Orid"), object references, e.g., both peer and agent references, the SLP, and the priority event queue associated with the SLP. More specifically, a SleeThread acts as an event channel between the service (SLP) and the ServiceAgent by implementing two key interfaces: PushConsumer, which enables the ServiceAgent to push events onto the SleeThread; and PullSupplier, which enables a service to pull events from its associated thread. As will be explained, each SleeThread has an instance of a PriorityEventQueue that queues NGINEvents, in the manner described.
Preferably, the (PriorityEventQueue) class is a platform-independent class that queues events (classes derived from NGINEvent) associated with a service (SLP). As shown with reference to steps 667 and 670 of fig. 12(f), each SleeThread object instantiates a PriorityEventQueue, which may comprise a hash table of events. Events are queued in descending order of priority; event priority is defined in the NGINEvent base class and ranges from 10 down to 1, with 10 being the highest priority, for example. In this way, each thread can keep track of the number of events that are available or unavailable for processing, allowing full parallelism of service processing.
FIG. 12(g) shows the (postEvent) method, which incorporates logic to determine the priority of an event being received by the thread, as indicated at step 675, and to register the event in the PriorityEventQueue. As shown in fig. 12(g), this is essentially accomplished by comparing the priority of the pushed event with the priority of the next event in the priority queue to be processed at step 678, deciding whether the priority of the pushed event is greater than the priority of the next event to be processed in the queue (if any) at step 680, and either placing the pushed event at the top of the queue, making it the next event to be processed, as indicated at step 682a, or looping through the queue to determine where the event should be stored according to its priority, as indicated at step 682b. Then, at step 684, the SleeThread processes the event with the highest priority when it is allocated processing time by the system.
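The insertion logic of the (postEvent) method may be sketched as follows, assuming the 10-to-1 priority scale described above; the class and field names are illustrative:

```java
import java.util.LinkedList;

// Hypothetical sketch of a priority event queue: events are kept in
// descending priority order, so the head is always processed next.
class PriorityEventSketch {
    static class Event {
        final String name;
        final int priority;   // 10 = highest, 1 = lowest
        Event(String name, int priority) { this.name = name; this.priority = priority; }
    }

    private final LinkedList<Event> queue = new LinkedList<>();

    // walk the queue until a lower-priority event is found, then insert;
    // equal-priority events keep their arrival order
    synchronized void postEvent(Event e) {
        int i = 0;
        while (i < queue.size() && queue.get(i).priority >= e.priority) i++;
        queue.add(i, e);
    }

    // the next event to process is the current highest-priority event
    synchronized Event next() { return queue.poll(); }
}
```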
More particularly, a PullSupplier interface is implemented by the SleeThread to support a consumer's request for event data from the supplier: the consumer either invokes the "pull" operation, which blocks until event data is available or an exception is raised, and then returns the event data to the consumer; or invokes the non-blocking "tryPull" operation. That is, if event data is available, tryPull returns the event data and sets the hasEvent parameter to true; if no event is available, it sets the hasEvent parameter to false and returns a null value. Thus, the SleeThread acts as the event supplier, while the service (SLP) assumes the role of the consumer. The service (SLP) uses the SleeThread's pull or tryPull operation to fetch event data from the SleeThread. The service uses the pull operation if it cannot continue without event data; otherwise, it uses the tryPull operation.
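The two sides of the event channel may be sketched with a blocking queue: `pull` blocks until event data is available, while `tryPull` reports availability through a `hasEvent` flag, as described above. The class and method names are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical event channel between a ServiceAgent (supplier side: push)
// and a service/SLP (consumer side: pull and tryPull).
class EventChannel {
    private final LinkedBlockingQueue<String> events = new LinkedBlockingQueue<>();

    // supplier side: the ServiceAgent pushes event data onto the thread
    void push(String event) { events.add(event); }

    // blocking pull: waits until event data is available
    String pull() {
        try {
            return events.take();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(ie);
        }
    }

    // non-blocking tryPull: hasEvent[0] plays the role of the hasEvent flag
    String tryPull(boolean[] hasEvent) {
        String event = events.poll();
        hasEvent[0] = (event != null);
        return event;
    }
}
```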
The PushConsumer interface is likewise implemented by the SleeThread and supports the operation by which a supplier communicates event data to the consumer: the supplier invokes the push operation on the thread and passes the event data as a parameter into the thread's priority event queue. Thus, the SleeThread acts as the event consumer, and the ServiceAgent takes the role of the supplier. The ServiceAgent uses the SleeThread's push operation to communicate event data to the SleeThread. A "kill" service event may carry the highest priority. The priority of an event may be a default, or it may be established at service creation when a newly created event class is designed.
As described above, the service agent instance for a particular service channels all events received and generated during service processing to and from the service thread instance established for the call. For example, the initiating event generated by a switch at a node may comprise a (ServiceRequestEvent) class responsible for communicating an initial service request, and in particular initial call context information, to IDNA/NGIN service control, such as: the time at which the service request was initiated; the switch ID from which the request originated; the port ID on which the call originated; the terminal equipment ID on which the call originated; the calling party number; the called party number, etc. A (ConnectEvent) subclass extending NGINEvent may report when a connection occurs and the station number to which the calling number is connected; in the context of the ATM-VNET service, it may also report the incoming and outgoing virtual path IDs. A (ReleaseEvent) subclass extending NGINEvent may report a release event. For example, in the context of the ATM-VNET service, a release may be generated when the calling or called party terminates the call, or when a user's credit runs out. Such a class may determine, through the implementing SIBBs: the time at which the release event occurred; the cause of the release event; and the time elapsed between the connection of the calling and called parties and the generation of the release event. Additionally, a termination message from the NGIN to the NGS may be expressed using a (TerminateEvent) subclass extending NGINEvent. Upon receiving this message, the switch may initiate the connection tear-down process. A (MonitorReleaseEvent) subclass extends NGINEvent and is used to send a message to the NGS instructing the NGS to forward a release indication to the NGIN when the NGS receives one.
When the NGS receives a monitor release message, the (UniNotifyEvent) subclass may be invoked to send a notification to the originator (the caller). The (MonitorConnectEvent) subclass extends NGINEvent and is used to send a message from the NGIN to the NGS instructing the NGS to send an event to the NGIN upon receiving a connect message.
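A sketch of the event class family described above; the fields shown are a subset chosen for illustration, and the default priority value is an assumption:

```java
// Hypothetical sketch of the NGINEvent class family.
abstract class NginEvent {
    final long timestamp;
    NginEvent(long timestamp) { this.timestamp = timestamp; }
    int priority() { return 5; }    // illustrative default; subclasses may override
}

// carries the initial call context from the switch to service control
class ServiceRequestEvent extends NginEvent {
    final String callingNumber, calledNumber, switchId;
    ServiceRequestEvent(long ts, String calling, String called, String switchId) {
        super(ts);
        this.callingNumber = calling;
        this.calledNumber = called;
        this.switchId = switchId;
    }
}

// reports a release and the elapsed time since the parties were connected
class ReleaseEvent extends NginEvent {
    final long elapsedMillis;
    ReleaseEvent(long ts, long elapsedMillis) { super(ts); this.elapsedMillis = elapsedMillis; }
}

// a "kill" service event carries the highest priority, per the text above
class KillServiceEvent extends NginEvent {
    KillServiceEvent(long ts) { super(ts); }
    @Override int priority() { return 10; }
}
```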
As described above, within the scope of real-time service processing, the data retrieval and update functions of data management include the ability of the DM to access stored data during service processing.
In the preferred embodiment, at any particular service node, the DM receives data requests during service processing, e.g., via the NOS, from managed object instances executing in the SLEE. If the data management cannot understand a data request, it notifies the requestor (e.g., the managed object). If the data request is to retrieve a data entity, the data management returns the requested data to the requestor (e.g., via NOS). It should be understood that any support required for manipulating and querying data in a single registry or in multiple registries is provided by the DM. Data management additionally supports the collection and correlation of query results across multiple registries. If the DM is unable to locate the name of the entity requested in a data retrieval request, the DM notifies the NOS component. The NOS component will also be notified if an error occurs during the retrieval of a data entity. The data management additionally informs the requestor (the object performing service control) when a particular data entity cannot be retrieved from a valid name. If the data request is to update a data entity, the data management updates the data entity and determines whether a response is needed. If the DM is unable to update the data entity specified in a data request, it notifies the requestor, and if it is unable to locate the name of the entity requested in a data update request, it additionally notifies the NOS. At any time during the running of the NGIN, the DM notifies the NOS of any database error occurring during the updating of a data entity. If the data request is to delete a data entity, the DM deletes the data entity and determines whether the transaction needs to be initiated in another registry.
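The retrieve/update/delete decision logic just described may be sketched as follows. This is a minimal illustration only; the request format, the registry dictionaries, and the `notify` interface are assumptions for the sketch, not the actual NGIN interfaces:

```python
class DataManager:
    """Illustrative sketch of DM request handling: dispatch on the operation,
    search the registries, and notify the requestor and/or NOS on failure."""

    def __init__(self, registries, nos):
        self.registries = registries   # registry name -> {entity name: value}
        self.nos = nos                 # any object with a notify(message) method

    def handle(self, request):
        op = request.get("op")
        if op not in ("retrieve", "update", "delete"):
            # DM cannot understand the request: answer the requestor directly
            return {"error": "request not understood"}
        name = request["name"]
        for registry in self.registries.values():
            if name in registry:
                if op == "retrieve":
                    return {"data": registry[name]}
                if op == "update":
                    registry[name] = request["value"]
                    return {"updated": True}
                del registry[name]                # op == "delete"
                return {"deleted": True}
        # entity name could not be located: notify NOS as well as the requestor
        self.nos.notify(f"DM: cannot locate entity {name!r} for {op}")
        return {"error": f"entity {name!r} not found"}
```

Collecting and correlating results across multiple registries would extend the single loop above to gather matches from every registry rather than returning on the first hit.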
Fig. 4(c) shows generally the functional structure of the data management section 400, which includes: a service control server component 405 for making call service data available at the service node for real-time call processing; and a database component 407, implemented as a separate database server, for storing and distributing a selected subset of the data maintained by the SA. Specifically, the service control server section 405 includes a Data Management (DM) client 410, which is the actual data management application; a DM API 412, which is linked with the DM application and is the interface the DM application uses to obtain data from the SA; a local cache 415, which is a shared memory on the service control server for storing some or all of the data from the DBOR extractor that may be used for call processing, according to a local caching policy; and a cache manager 420, which maintains the state of the local cache by implementing the local caching policy and retrieves data from the DBOR extractor by communicating with the DM server. The database component 407 includes a DBOR extractor 427, consisting of one or more databases having data to be used by managed object instances during service execution at the node; a DBOR extractor manager 426 for managing a selected subset of the information maintained by the SA; an SA client 422, which inputs data from service management to the DBOR extractor manager 426; a DDAPI 424, which is the processing interface between the SA client 422 and the data distribution processing of the SA; and a data management server 425, which generally handles the extraction of data from the DBOR extractor manager 426.
The data management operation will now be described in further detail with reference to figs. 4(c) and 8. Within a SLEE, several classes of functions may require data from data management 400, including but not limited to managed objects (SIBB, SLP, etc.) and NOS. Each of these is represented in fig. 4(c) as a DM client 410 executing within the service control SLEE. A DM client requests data using the DM API 412, which provides a common message set for all DM clients to interface with data management. The DM API 412 also hides from the DM client the specific location of the requested data, as the data may be stored in the local cache 415 or only in the DBOR extractor 427. The DM client 410 requests data by logical name, and the DM API 412 determines whether the data can be retrieved from the local cache or whether it needs to be requested from the DBOR extractor via the DM server. Preferably, the local cache 415 is a shared cache available to each process running on each SLEE provided within the control server 405; i.e., there may be one or more local caches provided for different applications, e.g., a 1-800 process cache, a routing manager cache, etc., with each shared cache having its own respective cache manager.
When the DM client 410 requests data, the DM API first checks the local cache 415 to see if the requested data is stored there. If the requested data is stored in the local cache 415, the DM API retrieves the requested data and provides it to the DM client 410 using any standard data retrieval technique, such as hash keys and algorithms, or an indexed sequential access method.
If the requested data is not stored in the local cache 415, the associated cache manager 420 retrieves the data from the DBOR extractor 427 through the DM server 425. In particular, the DM API 412 informs the cache manager 420 that it needs certain data, and the cache manager responds by sending a request to the DM server 425. The DM server 425, in turn, retrieves the requested data from the DBOR extractor, using the DBOR extractor manager 426 for database access. The DM server 425 sends the retrieved data back to the cache manager 420, which provides the data to the DM client 410 through the DM API 412. The cache manager may also write the requested data to the local cache 415 according to the local caching policy, which depends on both the service demands and the capabilities, particularly memory capacity, of the computers on which they run. These specifications are obtained from the service and computer profiles generated by service management. Preferably, the data cache manager component of the DM 400 of IDNA/NGIN employs a client-side caching policy at each service node.
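The local-cache-first retrieval path described above may be sketched as follows. The `fetch` method standing in for the DM server/DBOR extractor, and the policy callable, are illustrative assumptions:

```python
class CacheManager:
    """Sketch of local-cache-first retrieval: check the shared local cache,
    and on a miss fetch from the DBOR extractor via the DM server, optionally
    writing the result back according to the local caching policy."""

    def __init__(self, dm_server, cache_policy=lambda name: True):
        self.local_cache = {}
        self.dm_server = dm_server        # stands in for DM server + DBOR extractor
        self.cache_policy = cache_policy  # derived from service/computer profiles

    def get(self, logical_name):
        if logical_name in self.local_cache:          # cache hit
            return self.local_cache[logical_name]
        data = self.dm_server.fetch(logical_name)     # miss: go to the DBOR extractor
        if data is not None and self.cache_policy(logical_name):
            self.local_cache[logical_name] = data     # write back per local policy
        return data
```

A memory-constrained node would supply a more selective `cache_policy` (or an eviction scheme) rather than the permissive default shown here.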
The IDNA/NGIN network operating system ("NOS") component 700 is now described in detail with reference to FIGS. 8-10. As described above, NOS functions include inter-process communications, object connectivity, and local and network-wide resource management functions that are enabled for the IDNA/NGIN system 170. Because all NGIN processes execute on a wide variety of hardware and operating system platforms over a wide distribution architecture, NOS provides platform-independent and location-independent communications between all processes. In particular, the NOS includes several functional subcomponents to provide an interface between all the NGIN processes, including an interface between service control, service management, and data management. The NOS is also the interface between call control and service control (fig. 5), and enables two or more processes running on the same SLEE to communicate with each other.
As shown in figs. 8-10, the NOS functional subcomponents include: 1) a name translation ("NT") process 570, which resolves logical names for data and service objects to physical addresses identifying the computer (as a network address) and the memory address within which the requested object runs; 2) local resource management ("LRM") processes 575, 577, which track and maintain the status of resources executing at the SLEE and at the service node; 3) a global network resource status ("NRS") process 590, which maintains the status of all service node resources throughout the NGIN network; and 4) a set of services providing object connectivity and interprocess communications, such as those provided by an ORB compliant with the Common Object Request Broker Architecture (CORBA), which enables communication between objects across different computer platforms, API message sets, and Internet Protocol (IP) communications, in a manner that meets or exceeds certain real-time call processing performance requirements. For example, a typical response time for processing a typical 1-800 "collect call" event should be approximately 50 to 100 milliseconds.
As described herein, the NOS component 700 may be implemented for object connectivity using a CORBA-compliant ORB, such as Orbix®, developed by IONA Technologies of Cambridge, Massachusetts and Dublin, Ireland. The ORB provides communication between objects on different computer platforms through a name service that maps logical names to physical addresses.
At system boot time, a SLEE 450 is started and launches within its environment an instance of a NOS client component 558 and a service manager processing component 554. The SM SLP 554 retrieves the logical names of other components from the configuration file 580, including the logical names of the services to be instantiated immediately. It then provides those logical names to the ORB name service, which maps each logical name to a physical address. From this point, the ORB maintains the connectivity of the service objects. The ORB name service is also used for other service registrations: each service launched on a SLEE registers itself with the NOS, and it is through this registration that the ORB identifies the physical address corresponding to a logical name.
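The register-then-resolve pattern of the ORB name service can be illustrated with a minimal stand-in. This is not the CORBA Naming Service API, merely a sketch of the logical-name-to-physical-address mapping it provides:

```python
class NameService:
    """Minimal stand-in for the ORB name service: maps a logical service
    name to the physical address registered for an object instance."""

    def __init__(self):
        self._table = {}

    def register(self, logical_name, host, port):
        # called by each service as it launches on a SLEE
        self._table[logical_name] = (host, port)

    def resolve(self, logical_name):
        # called on behalf of any client invoking the service by logical name
        if logical_name not in self._table:
            raise LookupError(f"no object registered for {logical_name!r}")
        return self._table[logical_name]
```

Because callers hold only the unchanging logical name, a service can be restarted at a new physical address and re-registered without any change on the calling side.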
To enable platform-independent communication between interacting objects, interfaces are defined using an interface definition language ("IDL"). CORBA currently supports IDL; however, other object-oriented communication technologies, such as the Remote Method Invocation (RMI) protocol, may be implemented as long as the performance requirements for real-time call processing are met. In particular, the interfaces for each NGIN component are defined at setup time and made available at runtime by storing them in a persistent data store or library (not shown) associated with the local LRM 575, as shown in fig. 9. Services are thereby able to query the library for new object interfaces. The NOS client process 558 and NOS host process 560 (fig. 8) are NOS class libraries used to interface with NOS services and are used by all services running within the SLEE to invoke the NOS NT and LRM services, which are described in further detail herein.
Referring now to fig. 9, there is shown the functional structure of the NOS NT subcomponent 570 and LRM functional subcomponent 575, both of which reside on a computer executing one or more SLEEs 450 and 450', with NT and LRM subcomponents associated with each SLEE. Fig. 9 particularly shows an example of a single NGIN service node or "site" 45 having at least two computing systems 440 and 440 ', respectively implementing SLEE components 450 and 450 ' and respective NOS components 700 and 700 ', each of which includes a respective NT function sub-component 570 and 570 ', and a respective LRM function sub-component 575 and 575 '. Although a single SLEE is shown executing on a single computer, it should be understood that two or more SLEEs can run on the same computer at a single site. Running on each SLEE450, 450' are several service objects or processes labeled S1, …, S4, which may be SLP, LLP, CLP, ELP, constantly running FD logic and NOS client objects 558, or other processes.
As described herein, each NOS NT function subcomponent 570, 570' includes a process for identifying the correct version of a data or service object to be used, and the optimal instance of that object to be used, particularly by allowing a process to invoke any other process, using a single common logical name that remains unchanged across different versions and instances of the process being called. Thus, the NOS NT component 570 hosts object references, versions, and physical locations from the processed instances.
As illustrated herein, each local resource manager ("LRM") component 575, 575' of NOS 700 at each service node determines which services execute on which SLEE at a node, according to configuration rules contained in a service profile (configuration) file 580, which may include the contents of a service profile deployed from the SA and stored in a local LRM cache. The LRM first reads this service profile 580 stored in the local cache at the node and determines, according to the rules in the service profile, which particular SLEE is to run a service and which services are to run actively in the SLEE (as persistent objects) or be instantiated only on demand.
In the preferred embodiment, LRM575 allows runtime configuration and optimization of service execution by tracking the health and status of each service control resource. In particular, each LRM function subcomponent maintains a table of all services programmed to run on the SLEE, which service processes (object references) are actively running on a SLEE and the current load state (processing power) of the SLEE at that node according to predetermined thresholds.
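The per-SLEE bookkeeping just described (programmed services, active object references, and load against a threshold) may be sketched as follows. The class and method names are illustrative assumptions:

```python
class SleeLRM:
    """Sketch of a per-SLEE LRM table: which services are programmed to run
    here, which are actively running, and the current load state measured
    against a predetermined usage threshold."""

    def __init__(self, capacity, threshold=0.9):
        self.programmed = set()   # services deployed to this SLEE
        self.active = {}          # logical name -> object reference
        self.capacity = capacity  # nominal processing capacity (abstract units)
        self.load = 0
        self.threshold = threshold

    def available(self):
        # a SLEE accepts new work only while below its usage threshold
        return self.load / self.capacity < self.threshold

    def activate(self, name, object_ref):
        if name not in self.programmed:
            raise ValueError(f"{name} is not programmed to run on this SLEE")
        if not self.available():
            raise RuntimeError("usage threshold reached")
        self.active[name] = object_ref
        self.load += 1
```

The site-level LRM described below maintains a copy of each such table, tagged with a SLEE identifier.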
More particularly, the SLEE (server) LRM component 575 is a set of libraries built upon a local cache of object references corresponding to each object (logical program) in the system, including information about the server, such as IP address and port number, to enable communication. When new objects become available within the system, they are registered with NOS, i.e., an object reference is generated for them and registered in the local cache through data management.
After querying its service profile (configuration) file 580 to determine which services to instantiate immediately, the NOS LRM component 575 sends a service activation request, via the NOS NT 570, to the active service manager object 554 in the SLEE, through the NOS client interface also executing in the SLEE 450. The SM object 554 is an API object that allows control of SLEE services. For example, it provides the ability to instantiate a new service instance when a request for an inactive service is received. That is, it can assign a processing thread to the object when it is instantiated, and the service then registers itself with the NOS via the LRM 575. When one service is invoked by another service using its logical name, the LRM uses the rules in the configuration file to decide which instance to invoke, mapping the logical name to the physical address of an active instance using the ORB name service.
As shown in FIG. 9, associated with an NGIN site or service node 45 is a site LRM 577 running either on a separate computer 440'', as part of a NOS component 700'', or on a shared computer such as computer 440 or computer 440'. The functions of the site LRM 577 are: 1) tracking the availability of services at each SLEE as a function of the current load of all processes running on each SLEE; and 2) maintaining a resource status table that is an actively updated copy of each individual SLEE LRM 575, additionally tagged with a SLEE identifier for each resource. The site LRM subcomponent 577 decides which instance of a requested service should be used based on any of several criteria, including but not limited to: 1) the proximity of the called service instance to the calling service instance (same SLEE versus different SLEE, same site versus different site); 2) the proximity of the called service instance to the data management data required by the called service; and 3) the current system and processing loads.
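The selection criteria above amount to a lexicographic preference order. A minimal sketch, assuming candidates are represented as `(site, slee, load)` tuples (a representation invented here for illustration):

```python
def choose_instance(candidates, caller_site, caller_slee, data_site):
    """Pick a service instance by the site-LRM criteria: prefer proximity to
    the calling instance, then proximity to the required DM data, then the
    lowest current processing load."""
    def score(c):
        site, slee, load = c
        same_slee = 0 if (site, slee) == (caller_site, caller_slee) else 1
        same_site = 0 if site == caller_site else 1
        near_data = 0 if site == data_site else 1
        # tuples compare left-to-right, giving the lexicographic preference
        return (same_slee, same_site, near_data, load)
    return min(candidates, key=score)
```

Real deployments would weight these criteria per service profile rather than applying a fixed order; the fixed order here just makes the decision deterministic for the sketch.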
As an example, shown in fig. 9, whenever a process, e.g., S1 in SLEE 1, needs to instantiate an SLP, S4, to perform a particular process, e.g., a Vnet service, NOS first determines whether the service, i.e., its object reference, is available in the local cache, e.g., in SLEE 1. If the local LRM 575 does not have the requested object reference, NOS queries the site-level LRM 577 to determine the location of the particular object reference corresponding to the requested service. For example, as shown in fig. 9, the object may be found in SLEE 2, and once found, NOS makes the service available by instantiating the object, provided the SLEE has the capacity to do so, i.e., has not reached its usage threshold.
As further shown in fig. 10, in addition to the LRMs 575 for each SLEE and 577 for each site, the NOS component 700 additionally includes a network resource status ("NRS") subcomponent 590, which is a process that performs network-wide resource management functions. In particular, the NRS includes a subset of the data maintained by each site LRM in the network, e.g., site LRMs 577a, …, 577c corresponding to respective sites 440a, …, 440c in fig. 10. The NRS 590 includes: 1) a list of SLEEs; 2) which types of services are programmed to run on each SLEE; and 3) which services are actively running on each SLEE, i.e., each SLEE's current load as a percentage. This NRS subcomponent 590 is a logically centralized function that gives the NOS another level to which requests that cannot be satisfied by the site LRMs 577a, …, 577c can be propagated. In addition, the NRS subcomponent 590 includes, for each SLEE 450, a binary indicator of whether the SLEE is up or down, and whether a service usage threshold has been reached by that SLEE. The "up" or "down" indicator and the applied usage thresholds determine whether a SLEE is available to accept service requests from other services, and, given these indicators and thresholds, the NRS subcomponent can simply provide a binary indication of whether a SLEE is available. As an example, if a requested SLP object is found in a SLEE, but that SLEE does not have the capacity to instantiate the requested process, it will send a notification to the site LRM 577 that its usage threshold has been reached and that no further requests can be processed for that service. This information is also passed to the NRS component 590 (fig. 10).
Referring back to fig. 8, the NGIN system implements a monitoring mechanism 595 that monitors, for each SLEE in the system, memory capacity, database capacity, the length of the object request queue, the amount of time requests spend in the queue, and other resource/load parameters. These factors are made available to NOS 700, which determines SLEE usage thresholds based on one or more of them. Rather than a single fixed threshold, multiple thresholds may be used for hysteresis.
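The hysteresis idea mentioned above can be made concrete with a two-threshold sketch (threshold values here are arbitrary illustrations): a SLEE marked unavailable at a high-water mark is not marked available again until load falls below a lower mark, which prevents the availability indicator from flapping around a single threshold.

```python
class SleeAvailability:
    """Two-threshold hysteresis for the binary SLEE availability indicator."""

    def __init__(self, low=0.70, high=0.90):
        self.low, self.high = low, high
        self.available = True

    def update(self, utilization):
        if self.available and utilization >= self.high:
            self.available = False      # crossed the high-water mark
        elif not self.available and utilization <= self.low:
            self.available = True       # recovered below the low-water mark
        return self.available
```

Between the two thresholds the indicator simply keeps its previous value, which is the hysteresis band.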
An illustrative example of resource management functions performed by a NOS, including NT, LRM, and NRS, which enable NOS700 to provide location and platform independent processing while optimizing the overall processing power of the NGIN, will now be described in detail with reference to FIGS. 11(a) -11 (b). In LRM process flow diagram 801 described with reference to fig. 11(a) and 11(b), assume that service S1 executing on SLEE1 on one service control server needs to invoke service S2, as shown at step 802. The service S1 may be an FD or service logic program that receives an event service request from the switch fabric call control and needs to invoke another SLP, S2, for example to complete call processing.
Specifically, referring to fig. 11(a), the service S1 issues a request to NOS 700 using the logical name for SLP S2. Upon receiving the request for the SLP service object, the NOS name translation function 570a is implemented, as indicated at step 804, to determine whether the NOS recognizes the requested service as actively running on the local service control server, i.e., whether it has an object reference associated with the logical name of the requested service. Preferably, the data stored in the local server cache includes the following NOS naming data fields: 1) the SLP logical service name, which is the logical name by which the service is known and to which the feature discriminator data points; 2) an optional version number, which indicates the version of a particular service that may be required, e.g., by a particular user or by a node that needs that version of the service to run; 3) a status, which is one of: deployed, i.e., the SA has deployed the work package to the node but the service has not yet been activated; active, i.e., the service is currently active; or fallback, when it is desired to fall back to a previous version of the service object, e.g., to provide a quick rollback; 4) an object name or reference, which may include an IP address, port, and other information identifying the physical location of the object instance; 5) the in-service date and time and the out-of-service date and time; 6) an error handling object name, e.g., to be used if the object is not available or cannot be activated; and 7) a fallback object name to be executed when in a fallback status. The local server NOS naming process benefits from the services of an LRM status handler (not shown), which updates the local server cache status database with only those services currently active in a particular SLEE on the service control server. This allows the local server NOS name translation function to be performed locally first.
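The naming data fields enumerated above map naturally onto a simple record type. Field names and the `resolvable` helper are illustrative assumptions, not the actual cache schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NamingRecord:
    """Sketch of one local-cache NOS naming entry with the fields listed above."""
    logical_name: str
    version: Optional[str] = None         # optional service version number
    status: str = "deployed"              # deployed | active | fallback
    object_ref: Optional[str] = None      # IP address, port, instance identity
    in_service: Optional[str] = None      # in-service date/time
    out_of_service: Optional[str] = None  # out-of-service date/time
    error_handler: Optional[str] = None   # object to run if unavailable
    fallback_object: Optional[str] = None # object to execute in fallback status

    def resolvable(self) -> bool:
        # only currently active entries with a physical reference can be
        # resolved for a new service request
        return self.status == "active" and self.object_ref is not None
```

An LRM status handler would flip `status` and fill in `object_ref` as services are activated, keeping name translation a purely local lookup.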
When the NOS first gets a request, it refers to a logical name to obtain the object name (or object reference). The NOS derives the object name from the logical name, and the node LRM process decides the best instance of the requested object according to one or more rules, which are indicated at step 806.
If the logical name is identified and the logical object is available at step 804, processing proceeds to the LRM function of step 806 to select an active ("available") instance of S2 running on SLEE 1, according to certain criteria such as usage thresholds. If no active instance is found, the LRM can check whether S2 is programmed to run on SLEE 1 but has not yet been instantiated. If so, NOS 700 can decide to instantiate S2 on SLEE 1, provided SLEE 1 has sufficient available capacity. As previously mentioned, the LRM at the server level knows only what is programmed and what is active on that server. If the object is currently active and instantiated at the local server level, an object reference for a new thread instance of the service is returned to the requesting SLP. If not already instantiated, the NOS initiates a new service thread instance, based on the returned object reference, to execute the requested service, and returns that object reference.
If it is determined at step 804 that SLEE 1 does not have sufficient available capacity, or if S2 is not programmed to run on SLEE 1, the LRM on SLEE 1 sends a service request to the site LRM 577a (fig. 10). The site LRM applies similar business rules and decides whether an instance of S2 is active, or should be instantiated, on another SLEE of the site. Thus, at step 810, the node NOS name translation function is implemented to determine whether the requested logical name is available at the node, i.e., whether another SLEE, on the same or a different local service control server of the node, maintains an object reference associated with the logical name of the requested service. If the logical service name is identified at step 810, the NT subcomponent 570 queries the NOS LRM 575 to determine which instance of S2 to use. The node LRM then applies the business rules against a node cache status database (not shown), at step 814, to retrieve the desired object reference for the requested service and, if active, return that address to the calling SLP (step 802, fig. 11(a)). If it is determined that the service is not currently instantiated, or that the required service cannot be instantiated at a particular SLEE due to processing load or other imposed limitations, then at step 818 an allocation and loading process is performed: by examining the node cache status database and applying the applicable business rules relating to, e.g., service proximity, data proximity, thresholds, and current processing loads, a SLEE in which the service object can be instantiated is selected, the requested service instance is implemented there, and its address is returned to the calling SLP. It should be appreciated that a round-robin scheme may be applied in deciding in which SLEE to instantiate a service thread when more than one SLEE is available.
Returning to FIG. 11(a), if it is determined at step 810 that the current node cannot identify the requested logical name, i.e., the node cache has no object reference associated with the logical name of the requested service, or the object instance cannot be instantiated at that node due to the applied business rules, then at step 822 the global network resource status (NRS) process 590 is queried to check the current status of SLEEs across the intelligent network 170 and to determine a SLEE that can process the service request for S2. Before this, a check is made to determine whether an index representing the number of times network resource status has been queried to find an object reference exceeds a predetermined limit, e.g., three times, as indicated at step 820. If the threshold has been exceeded, the process terminates and an administrator may be notified that the service object cannot be found and that an error condition exists, as shown at step 821. If the NRS query threshold has not been exceeded, the NRS process determines which service node in the network can perform the requested service, as shown at step 822. After a node in the intelligent network has been selected, as indicated at step 822, processing continues to step 824, fig. 11(b), where the node NOS name translation function 570b is implemented to obtain an object reference associated with the logical name of the requested service. If the logical service name cannot be identified at that node at step 824, then at step 829 the NRS query index is incremented by one and the process returns to step 820, fig. 11(a), to check whether the index threshold has been exceeded, in which case an error condition exists. If, at step 820, fig. 11(a), the NRS query index has not exceeded its predetermined threshold, the NRS process 590 is queried again at step 822 to find a new location for the available service at another service node.
If the logical name is identified at step 824, processing continues at step 826 to determine, based on acceptable processing loads, an address associated with the requested object reference. The address is then returned to the requesting SLP, as shown at step 802, fig. 11(a). If it is determined at step 826 that the service is not currently instantiated (active), the process proceeds to step 828, where an allocation and loading process implements the requested service instance in a SLEE in which, by checking the node cache status database 768 at the node and applying the business rules, it is determined that the service object can be instantiated. The address of the instantiated SLP object instance is then returned to the requesting client, at step 824.
Once an active instance of S2 is selected, the object reference for that S2 instance is returned to the NT on SLEE 1 (step 802). The NT then effectively translates the logical name S2 into an object identifier for the selected instance of S2, and uses that object identifier in inter-process communications between S1 and S2. The object identifier includes an IP address, port, and other information identifying the physical location of the object instance. Once an object reference is determined, the NOS provides object connectivity between the two services by implementing a CORBA-compliant ORB over a data communication connection such as the UDP/IP protocol. The location of the called service, whether running on the same SLEE or on another SLEE at a site thousands of miles away, is completely transparent to the calling service. Thus, if the SLP required to service a call is instantiated on a SLEE at a remote site, the call is still maintained at the switch that received it. Preferably, once an object reference has been located, e.g., at another site through the NRS level, the NOS, through service management, ensures that the object reference is cached at the requesting site for future reference. Thus, in the present example, to avoid the subsequent site LRM lookups that would otherwise occur when the service is needed again, the object reference for service S2, once located, is cached in the local cache of the LRM 575 on SLEE 1. It will be apparent to those skilled in the art that there are many ways to provide service object reference data to a SLEE; for example, a NOS data replication mechanism can be used to replicate all object references held at a site LRM 577 to each SLEE LRM of that site.
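The escalating lookup of figs. 11(a)-11(b), including the caching of located references and the bounded number of NRS queries, may be sketched as follows. The dict-based LRM caches and the `locate` method are assumptions made for the sketch:

```python
def resolve_service(name, server_lrm, site_lrm, nrs, max_nrs_queries=3):
    """Sketch of the three-level resolution: try the local server LRM cache,
    then the site LRM, then query the network-wide NRS up to a fixed limit
    before reporting an error condition. Each *_lrm is a dict mapping
    logical names to object references."""
    if name in server_lrm:                # level 1: local server cache
        return server_lrm[name]
    if name in site_lrm:                  # level 2: site LRM
        ref = site_lrm[name]
        server_lrm[name] = ref            # cache locally for future lookups
        return ref
    for _ in range(max_nrs_queries):      # level 3: network resource status
        ref = nrs.locate(name)            # a reference at some remote node, or None
        if ref is not None:
            site_lrm[name] = ref          # cache at the requesting site
            return ref
    raise LookupError(f"service object {name!r} cannot be found")
```

The write-backs into `server_lrm` and `site_lrm` correspond to the caching of remotely located object references at the requesting site described above.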
In the context of 1-800 calls ("18C"), an 18C call processing and service usage scenario is now described with reference to the flow diagrams of FIGS. 13(a) -13(C) and the conceptual functional diagram of FIG. 18 for illustrative purposes. First, as shown at step 920, the call first arrives at the NGS switch fabric 75. When a call is received by the NGS, the bearer control component (fig. 5) provides the call control component with an access line on which the call is received, as well as the ANI, dialed number, and other data needed for call processing. The call control component maintains a state model for the call that is executed according to its programmed logic. Also included in the state model are triggers for implementing an ELP540 instance and sending a service request to FD510, which are shown in fig. 18. To implement an ELP instance, the NGS call control component 90 sends a message to the NOS using a logical name that is an ELP, which is indicated at step 923 in fig. 13 (a). The NOS, in response, sends a message to the service manager object (fig. 8) implementing an ELP instance in a SLEE and returning an object reference for the ELP to the call control, as indicated at step 926. The NGS call control element includes the object reference in a service request message that is sent to the FD in SLEE, which is indicated at step 929. In this way, all qualified event data generated by any process for the call is written into the instantiated ELP process. Specifically, the service request message is sent to the logical name that is the FD; the logical name is translated by the NOS NT component to a physical address that is an FD logical program running on the same service node that received the call. Included in the service request message are the dialed number, ANI, and other data.
Next, the FD uses its feature differentiation table to identify which SLP processes the received service request, as shown in step 931. For example, if the received message is an 18C service request, it is processed by an 18C SLP. Table 3 below is an example of a excerpted FD table with entries including pointers to various "free" calls, e.g., 1-800, services.
Entry Port Table
  '001001'       SLP pointer      'Vnet'
  '001002'       table pointer    FGD table

FGD Table
  1800           table pointer    800 table
  1888           table pointer    800 table
  1900           table pointer    900 table
  1              SLP pointer      'Local number'

800 Table
  1800collect    SLP pointer      '1-800-C'
  18008888000    SLP pointer      'Op Service'
  1800           SLP pointer      '800 service'
  1888           SLP pointer      '800 service'
Here, FGD stands for feature group discriminator. In particular, depending on where the call originated in the switching network and the type of call received (e.g., 1-800), the FD will determine an appropriate SLP logical name; e.g., the designation "001002" indicates that a call has been received that requires the FGD table to be consulted (a pointer to the FGD table). The FGD table in turn maintains pointers to other tables, here discriminating on the called number, e.g., an 800 number. From the 800 table, the FD obtains a pointer to the requested SLP logical name. The SLP is then invoked, and the service request is handed over to the NOS, which instantiates a CLP 545, an LLPO 530, and the SLP 520 object instances for the requested 18C service. For the LLPO, NOS is provided with a logical name based on the bearer control line on which the call was received. The identification of the line may be based on the ANI as well as on the access line identified by the bearer control unit 80. The ANI identifies the originating access line from which the call originated, which may or may not be the same access line on which the NGS received the call; i.e., the received call may have originated in a local network, for example, and been routed to switch 75 over an inter-exchange carrier network. Thus, features associated with the line, such as call waiting or call interrupt, may be identified from the ANI. The NOS translates the logical name of the LLPO to a physical address and implements an LLPO instance. While other logic programs (such as SLPs) may be instantiated at other sites, LLPs are instantiated at the site where their associated lines are located. The LLP is instantiated within a SLEE, which may be on a service control server or on a call control server.
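The pointer-chasing through the tables above can be sketched as nested dictionaries with longest-prefix matching on the dialed digits. The `table:`/`slp:` encoding and the wildcard handling (the bare `1` row treated here as a catch-all) are assumptions made to keep the sketch runnable:

```python
# Nested pointer tables modeled as dicts; "table:" entries point to another
# table, "slp:" entries yield the SLP logical name. The longest matching
# prefix of the dialed digits selects the most specific row.
TABLES = {
    "entry_port": {"001001": "slp:Vnet", "001002": "table:fgd"},
    "fgd": {"1800": "table:800", "1888": "table:800",
            "1900": "table:900", "": "slp:Local number"},
    "800": {"1800collect": "slp:1-800-C", "18008888000": "slp:Op Service",
            "1800": "slp:800 service", "1888": "slp:800 service"},
    "900": {"1900": "slp:900 service"},
}

def discriminate(port, dialed):
    """Follow pointers from the originating port entry to an SLP logical name."""
    entry = TABLES["entry_port"][port]
    while entry.startswith("table:"):
        table = TABLES[entry.split(":", 1)[1]]
        # longest matching prefix of the dialed string wins
        prefix = max((p for p in table if dialed.startswith(p)), key=len)
        entry = table[prefix]
    return entry.split(":", 1)[1]
```

For example, a call arriving on port "001002" with dialed digits beginning 1800 is routed through the FGD table into the 800 table, where the most specific entry determines the SLP to invoke.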
Once instantiated, the LLPO queries Data Management for the features associated with the line, maintains the state of the originating line, and invokes any of those features, such as call waiting or overflow routing, when triggered either by the caller (e.g., call waiting) or by the network (e.g., overflow routing).
Referring to step 934 of fig. 13(a), the NOS receives from the feature discriminator a service request handover containing a logical name indicating the particular service to be invoked, e.g., 18C. The NOS recognizes that the request contains a logical name and consults its instance table (not shown) to determine whether it has any SLP process available to service the request. Through the NOS LRM function it also identifies which instance of the requested type is to be used, as indicated at step 937. The NOS then sends a request to the service manager object running on a service control SLEE to invoke the requested 18C SLP service, as indicated at step 941. In the preferred embodiment, the NOS selects an SLP from the service control server that received the incoming service request notification from the NGS; however, it should be understood that the NOS may select an SLP within any service control element by invoking the NOS LRM and consulting its service control instance tables and their current state. As indicated at step 943, the NOS determines whether the selected SLP has already been instantiated and, if not, directs the SM to instantiate the SLP object, as indicated at step 946. Otherwise, if the selected SLP has already been instantiated, the thread manager assigns a new processing thread to the SLP object, as indicated at step 945.
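The instantiate-or-reuse decision of steps 943-946 can be sketched as follows; the interfaces and names here are illustrative stand-ins for the NOS LRM, instance table, and service manager, not an actual NGIN API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the NOS local resource management (LRM) decision of steps 943-946:
// reuse an already-instantiated SLP (assigning it a new processing thread),
// or direct the service manager (SM) to instantiate one.
public class NosLrm {
    // A running SLP instance; a call is served by assigning it a thread.
    interface SlpInstance { void assignThread(String callId); }

    // Stand-in for the service manager object running on a service control SLEE.
    static class ServiceManager {
        SlpInstance instantiate(String logicalName) {
            return callId -> { /* a new processing thread serves this call */ };
        }
    }

    private final Map<String, SlpInstance> instanceTable = new HashMap<>();
    private final ServiceManager sm = new ServiceManager();

    // Step 943: is the requested SLP instantiated? If not, step 946: have the
    // SM instantiate it. Step 945: assign a processing thread either way.
    public SlpInstance dispatch(String logicalName, String callId) {
        SlpInstance slp = instanceTable.computeIfAbsent(logicalName, sm::instantiate);
        slp.assignThread(callId);
        return slp;
    }

    public static void main(String[] args) {
        NosLrm lrm = new NosLrm();
        SlpInstance first = lrm.dispatch("18C", "call-1");
        SlpInstance second = lrm.dispatch("18C", "call-2");
        System.out.println(first == second); // prints true: the instance is reused
    }
}
```

Reusing one instance across calls, with per-call threads, matches the thread-manager behavior the text attributes to the SLEE.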
At the next step 949 of fig. 13(b), the instantiated SLP process registers its physical address with the NOS, and the NOS assigns the SLP to the service request. The NOS then transmits a service request handover message to the new SLP, as indicated at step 951. In parallel, the NOS sends to the instantiated CLP all data, including object references for the instantiated SLP, ELP, and LLPO objects. Object references for the CLP and ELP are likewise provided to the LLPO and the SLP, so that the LLPO and the SLP can interface with the CLP and the ELP. Finally, the SLP begins processing the call according to its programmed logic, as indicated at step 954.
In the context of an 18C call, the 18C SLP 520 preferably obtains the necessary data from an 18C routing database (not shown) to make the appropriate routing decision. As shown in fig. 13(c), the 18C SLP 520 performs the following steps: sending to the NOS NT function the logical name of the 18C routing database it requires, at step 960; querying the DM with the logical 18C routing database name and receiving from the DM the actual 18C routing database name and its storage location, as indicated at step 962; requesting the NOS LRM to determine whether the 18C routing database is available locally, as indicated at step 964, and, if so, receiving back the physical address of the 18C database, as indicated at step 966; querying Data Management for a customer profile lookup by sending the called 800 number, the line ID, the originating switch trunk, and the ANI, as indicated at step 968; receiving the final routing information, including the switch/trunk, back from the DM, as indicated at step 970; and requesting the DM to look up the actual terminating location (node) of the termination specified in the routing response, and receiving that terminating node location from the DM, at step 972. Thereafter, the process entails sending the routing response information to the ELP 510 for placement in the call context data, e.g., for storage in the DM, and sending an outdial request, including the routing information, to the CLP 545 with a handover command. In this scenario the terminating node may be remote, in which case a terminating LLP must be instantiated, and a profile lookup performed, at the remote node to determine any features on the terminating line. In other service flow scenarios, an SLP may have to invoke one or more other SLPs.
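The query sequence of fig. 13(c) can be condensed into the sketch below. The NameTranslation and DataManagement interfaces are hypothetical stand-ins for the NOS NT function and the DM component; they are not an actual NGIN API:

```java
// Sketch of the 18C SLP routing sequence of fig. 13(c): resolve the routing
// database, perform the customer profile lookup, then resolve the actual
// terminating node for the outdial request. Names are illustrative only.
public class Slp18C {
    interface NameTranslation { String translate(String logicalDbName); }

    interface DataManagement {
        // customer profile lookup: returns routing info (switch/trunk)
        String route(String called800, String lineId, String trunk, String ani);
        // resolves the termination in the routing response to an actual node
        String terminatingNode(String routingResponse);
    }

    private final NameTranslation nt;
    private final DataManagement dm;

    public Slp18C(NameTranslation nt, DataManagement dm) { this.nt = nt; this.dm = dm; }

    // Steps 960-972 condensed into one call sequence.
    public String process(String called800, String lineId, String trunk, String ani) {
        String dbName = nt.translate("18C routing database");      // steps 960-962
        String routing = dm.route(called800, lineId, trunk, ani);  // steps 968-970
        return dm.terminatingNode(routing);                        // step 972
    }

    public static void main(String[] args) {
        Slp18C slp = new Slp18C(name -> "routing-db@node-3",
                new DataManagement() {
                    public String route(String n, String l, String t, String a) { return "switch7/trunk3"; }
                    public String terminatingNode(String r) { return "node-5"; }
                });
        System.out.println(slp.process("18005551234", "line-1", "trunk-9", "ani-2")); // prints node-5
    }
}
```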
Once the SLP has determined a network termination for the call, or has otherwise determined an action to be performed by the resource complex, such as detecting DTMF digits or playing an announcement, it sends a service response message to NGS call control, as indicated at step 957. Call control then executes the instructions, which may include instructing the NGS switch 75' (fig. 14) to set up and complete the call to the network termination, as indicated at step 959.
More particularly, an outdial/handover procedure is implemented in which the CLP 545 sends the outdial request to the LLPO (originating line) using a handover command; the request is passed to a NOS agent at the call switch, which directs the call to the terminating node. The ELP process then writes the outdial call context data to the DM.
Referring back to step 957, if the SLP returns to call control a physical address for the network termination, then an LLPT process instance 531 is instantiated for the line on which the call terminates. This is accomplished by having the NOS associate the logical name of the LLPT with the network termination supplied by the SLP; this logical name is provided to the NOS either by the SLP (in one embodiment) or by call control in a service request to the FD (in another embodiment). The NOS in turn instantiates the LLPT in a SLEE at the service node where the terminating line exists.
Alternatively, at step 957 the SLP may instead return a request for a particular resource, such as an IVR function in the example of 18C call processing. The NGS determines which resource is to be allocated, i.e., which switch port has IVR capability, which VRU port, etc., and returns an address for that resource to the SLP. The SLP then identifies the address of the LLPT associated with that resource (through a query to Data Management) and requests instantiation of the LLPT instance. The call is then directed to that resource and processed, perhaps with another service request to NGIN.
When the call has completed (i.e., when both parties have disconnected), the LLPs receive a call completion notification from the NOS component at their respective switches 75, 75' (fig. 14) and forward the notification to the CLP. The CLP in turn forwards the call completion notification to the associated LLPs and the ELP, and these instances are terminated upon the CLP's notification. Prior to its termination, the ELP first stores its call detail data, which needs to be maintained after the call completes, e.g., for billing and various other purposes.
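The teardown sequence just described can be sketched as follows; the class and method names are illustrative only, and the point shown is simply that the ELP persists its call detail data before its instance terminates:

```java
// Sketch of call-completion teardown: the CLP fans the completion
// notification out to the associated LLP(s) and the ELP, and the ELP
// persists its call detail record before terminating.
public class CallTeardown {
    interface CallDetailStore { void persist(String callId, String record); }

    static class Elp {
        private final CallDetailStore store;
        Elp(CallDetailStore store) { this.store = store; }
        void onCallComplete(String callId) {
            store.persist(callId, "call detail record"); // keep billing data first
            // ... then the ELP instance terminates
        }
    }

    static class Clp {
        private final Runnable llpTerminate;
        private final Elp elp;
        Clp(Runnable llpTerminate, Elp elp) { this.llpTerminate = llpTerminate; this.elp = elp; }
        void onCallComplete(String callId) {
            llpTerminate.run();         // LLP terminates on the notification
            elp.onCallComplete(callId); // ELP stores call detail, then terminates
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Elp elp = new Elp((id, rec) -> log.append("persist:").append(id).append(';'));
        Clp clp = new Clp(() -> log.append("llp-done;"), elp);
        clp.onCallComplete("call-42");
        System.out.println(log); // prints llp-done;persist:call-42;
    }
}
```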
Several preferred embodiments are described in detail above. It is to be understood that the scope of the invention also includes embodiments that differ from the described embodiments but that are within the scope of the claims.
For example, a general purpose computer is understood to be a computing device that is not specifically manufactured for a class of applications. A general purpose computer may be any computing device of any size that can perform the functions required to implement the present invention.
An additional example is that the "Java" programming language may be replaced with other equivalent programming languages having similar features and performing similar functions as required to implement the present invention.
The use of these and other terms herein is not meant to limit the invention to those terms alone. The terms used may be interchanged with other terms that are synonymous and/or denote equivalents. Lists of words should be construed as non-exhaustive when considering the scope of the present invention. It is also to be understood that various embodiments of the invention may be implemented using, or with the aid of, hardware, software, firmware, or microcode.
While the invention has been disclosed and discussed in connection with the above embodiments, it will be apparent to those skilled in the art that numerous changes, variations and modifications can be made within the spirit and scope of the invention. It is therefore intended that such changes and modifications be covered by the following claims.

Claims (16)

1. A control system for a communications network having a plurality of service nodes, each node having a memory storage device and an execution environment for executing services in response to receiving an event at a network switch element associated with a service node, said system comprising:
a service manager device for generating a service profile at each node, the service profile including types and amounts of service object resources associated with service processes at each node, and downloading said types and amounts of service object resources to said node in accordance with said profile;
an instantiation mechanism to implement a service object instance to execute in the one or more execution environments; and
resource management apparatus for tracking execution environment resources of a service node and maintaining a table of service types available to each service node within the network, each service type having an associated capability status indicating whether a requested service is available for instantiation at a service node, wherein when the capability status indicates that a requested service is not available for instantiation in the network, the resource management apparatus communicates to the central manager apparatus indicating that a new service object instance needs to be implemented to download and activate a new service at a service node.
2. The system of claim 1, wherein the instantiation mechanism comprises:
a first object for loading one or more service objects from said memory storage system and implementing said one or more object instances for execution in said execution environment;
a second object corresponding to a particular service for assigning one or more service threads to each service instance corresponding to each received said service request, each service thread instance having a unique identifier associated therewith.
3. The system of claim 2, further comprising a network operating system for providing real-time communication of messages and events between executing object instances, said second object corresponding to a particular service providing a channel for events and messages between said service instances, said events and messages including said unique identifier to coordinate received messages and events to the appropriate service instance.
4. The system of claim 3, further comprising an event queue mechanism assigned to each service thread instance for queuing events associated with said service instances received during service execution,
wherein events have an associated priority indicating the order in which the events should be executed, the event queue mechanism allowing processing of received events according to their associated priorities.
5. The system of claim 3, further comprising a class loader process for initially loading one or more service objects from said memory storage system according to a configuration file that implements initial service capabilities for said service node, said class loader being responsible for implementing said first object and any service object instances to be made available according to a predetermined service capability policy.
6. The system of claim 3, wherein the second object corresponding to a particular service includes a thread manager instance for comparing the number of thread instances associated with a service to a predetermined threshold determined in the service profile and generating an alert signal to the resource management device when the execution environment no longer supports implementation of a new service thread instance.
7. The system of claim 6, wherein the service object instantiation mechanism includes the network operating system, the resource management device additionally tracking the processing capabilities of the execution environment at each service node and providing an indication to the network operating system whether the execution environment at a service node can execute a service according to its processing capabilities.
8. The system of claim 7 wherein said resource management device further communicates an overload condition to said network operating system and prevents implementation of additional service object instances at an execution environment when the number of service threads currently executing at said execution environment exceeds said predetermined threshold.
9. The system of claim 3, wherein the instantiation mechanism comprises:
a registry of active service object threads corresponding to service instances executing in an execution environment at each of said service nodes,
mapping means for mapping a service logical name with an object reference, the object reference being used by the network operating system to allow implementation of a requested service object thread instance in a native execution environment.
10. A method of providing services at service nodes in a communications network, each service node having a memory storage device and an execution environment for executing services in response to receipt of an event at a network switch element associated with a service node, the method comprising:
generating a service profile for each service node, the service profile including types and amounts of service object resources associated with service processes at each node, and downloading said types and amounts of service object resources to said node in accordance with said profile;
implementing a service object instance for execution in the one or more execution environments;
tracking execution environment resources at a service node by maintaining a table of service types available at each service node, each service type having an associated capability status indicating whether a requested service is available for instantiation at a service node,
wherein when the capability status indicates that a requested service is not available for instantiation in the network, communicating to the central management device that a new service object instance needs to be implemented, thereby downloading and activating a new service object at a service node.
11. The method of claim 10, wherein the instantiating step comprises:
providing a first object for loading one or more service objects from said memory storage system and implementing said one or more object instances for execution within said execution environment in accordance with received service requests;
a second object corresponding to a particular service is provided for assigning one or more service threads to each service instance corresponding to each received said service request, each service thread instance having a unique identifier associated therewith.
12. The method of claim 11, further comprising the step of communicating messages and events generated during service object execution between one or more executing service objects to support service processing, the events and messages identified by the unique identifier to correctly execute a service instance by the second object.
13. The method of claim 12, further comprising the step of queuing events associated with an executing service instance received during service execution, said events having an associated priority indicating an order in which said events should be executed, wherein said received events are processed according to their respective priorities.
14. The method of claim 10, further comprising: one or more service objects are initially loaded from the memory storage system according to a configuration file that provides initial service capabilities for the service node, the class loader being responsible for implementing the first object and any service object instances to be made available according to a predetermined service capability policy.
15. The method of claim 11, wherein the step of tracking execution environment resources at a service node comprises:
comparing the number of thread instances associated with a service to a predetermined threshold determined in the service profile;
an alert signal is generated to the resource management device when an execution environment no longer supports the implementation of a new service thread instance.
16. The method of claim 10, wherein the instantiation mechanism comprises:
maintaining a registry of active service object threads corresponding to service instances executing in one execution environment of each of said service nodes,
mapping a service logical name to an object reference;
the use of the object reference allows implementation of a requested service object thread instance in a native execution environment.
HK02105474.2A 1998-10-20 1999-10-20 Method and apparatus for providing real-time call processing services in an intelligent network HK1044389A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US60/104,890 1998-10-20

Publications (1)

Publication Number Publication Date
HK1044389A true HK1044389A (en) 2002-10-18
