
MXPA01001277A - Method and system for an intelligent distributed network architecture - Google Patents

Method and system for an intelligent distributed network architecture

Info

Publication number
MXPA01001277A
Authority
MX
Mexico
Prior art keywords
intelligent
network
service
functions
switching node
Prior art date
Application number
MXPA/A/2001/001277A
Other languages
Spanish (es)
Inventor
Kelvin Porter
Carol Waller
Robert Barnhouse
Doug Cardy
Ken Rambo
Wendy Wong
George Yao
Original Assignee
Mci Communications Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mci Communications Corporation
Publication of MXPA01001277A

Abstract

The present invention provides an intelligent call processor (172), an intelligent switching node, and an intelligent communications network for use in a communication system (170). The intelligent call processor comprises a logical platform having a plurality of functions, wherein at least one of the functions is a service processing function (22), at least one of the functions is call processing (24), and at least one of the functions is facility processing (26), and a processor (172) for executing the plurality of functions. The intelligent switching node comprises an intelligent call processor (172) and a resource complex (180) communicably linked to the intelligent call processor (172) and logically separated from the intelligent call processor (172). The intelligent communications network comprises a plurality of intelligent distributed network nodes, a network management system for monitoring and controlling a wide area network and the plurality of intelligent switching nodes, and the wide area network interconnecting the plurality of intelligent distributed network nodes and the network management system.

Description

METHOD AND SYSTEM FOR AN INTELLIGENT DISTRIBUTED NETWORK ARCHITECTURE The present invention relates generally to network switching in a telecommunications system and, more particularly, to a method and system for an intelligent distributed network architecture for service processing. A network service is a function performed by a communications network, such as data or telephony, and its associated resources, in response to an interaction with one or more subscribers. For example, a subscriber may invoke a service resident in the telephony network, such as call forwarding or voice mail access, by dialing a special sequence of digits. Other network services may be directed at assisting a network owner with security, validation, and authentication. Adding or modifying a service requires changes to the communications network. Most conventional telecommunications networks are composed of interconnected switches and communication devices. These switches are controlled by integrated or embedded processors operated by proprietary software or firmware designed by the switch manufacturer. Typically, the switch manufacturer's software or firmware must support all functional aspects of service processing, call processing, facility processing, and network management. This means that when a network owner wishes to implement a new service or modify an existing service, the software of every switch in the network must be revised by the various switch manufacturers. The fact that the network contains different switch models from different manufacturers requires careful development, testing, and deployment of the new software. The time required to develop, test, and deploy the new software is lengthened because the code in each switch grows larger and more complex with each new revision. Thus, this process can take several years. In addition, this increased complexity further burdens the switch processors, increases the chances of a switch malfunction, and may require modification or replacement of the switch. Moreover, the fact that multiple network owners depend on a common set of switch manufacturers results in two undesirable situations that limit competition. First, a manufacturer's software release may attempt to incorporate changes requested by several network owners, thus preventing the network owners from truly differentiating their services from the services provided by their competition. This also forces some network owners to wait until the manufacturer incorporates requests from other network owners into the new release. Second, a switch software release incorporating a function as requested by one network owner to implement a new service may unintentionally become accessible to other network owners. These problems have become intolerable as the demand for new network services has increased exponentially over the last five to ten years due to increased subscriber mobility, increased variety and bandwidth of traffic, dissolution of traditional numbering plans, more sophisticated services, and increased competition. Therefore, it is widely recognized that new network architectures need to incorporate a more flexible way of creating, deploying, and executing service logic. In order to fully appreciate the novel architecture of the present invention described hereinbelow, the following description of the relevant prior art is provided with reference to Figures 1 to 4.
Referring to Figure 1, there is shown a logical representation of various switching architectures, including that of the present invention. A monolithic switch, which is generally denoted as 20, contains service processing functions 22, call processing functions 24, facility processing functions 26, and a switch fabric 28. All of these functions 22, 24, 26, and 28 are hard-coded, intermixed, and undifferentiated, as symbolized by the group 30. Moreover, functions 22, 24, 26, and 28 are designed by the switch manufacturer and operate on proprietary platforms that vary from manufacturer to manufacturer. As a result, these functions 22, 24, 26, and 28 cannot be modified without the aid of the manufacturer, which slows down service development and implementation and increases the cost of bringing a new service to market. The development of new and innovative services, call processing, data processing, signal processing, and network operations is therefore constrained by the manufacturer's control over its proprietary switch hardware and software, and by the inherent difficulty of establishing and implementing industry standards. The service processing functions 22 are encoded within the monolithic switch 20 and only allow local control of this process based on the content of local data and the dialed number. This local information is interpreted by a hard-coded process engine that carries out the encoded service function. The call processing functions 24 are hard-coded and provide call origination and call termination functions. This process actually brings up and takes down individual connections to complete a call. Likewise, the facility processing functions 26 are also hard-coded and provide all processing related to the physical resources involved in a call. The switch fabric 28 represents the hardware component of the switch and the computer that runs the monolithic software provided by the switch manufacturer, such as Northern Telecom, Inc. The switch fabric 28 provides the physical facilities necessary to establish a connection, and may include, but is not limited to, bearer devices (T1 and DS0), switching matrix devices (network planes and their processors), link layer signal processors (SS7, MTP, ISDN, LAPD), and specialized circuits (conference ports, audio tone detectors). In an attempt to address the problems described above, the International Telecommunication Union and the European Telecommunication Standards Institute endorsed the ITU-T Intelligent Network Standard ("IN"). Similarly, Bellcore endorsed the Advanced Intelligent Network Standard ("AIN"). Although these two standards differ in presentation and in their state of evolution, they share virtually identical basic objectives and concepts, and accordingly these standards are treated here as a single network architecture in which the service processing functions 22 are separated from the switch. Using the IN and AIN architectures, a network owner could presumably deploy a new service by creating and deploying a new Service Logic Program ("SLP"), which is essentially a list of Service Independent Building Blocks ("SIBBs") to be invoked during a given type of call. In accordance with this approach, a number of specific element types interoperate in conjunction with an SLP to provide services to subscribers in the network; as a result, any new or potential services are limited by the existing SIBBs.
The IN or AIN architecture, which is generally denoted as 40, logically separates the functions of the monolithic switch 20 into a Service Control Point ("SCP") 42 and a Service Switching Point ("SSP") and Switching System 44. The SCP 42 contains the service processing functions 22, while the SSP and Switching System 44 contains the call processing functions 24, the facility processing functions 26, and the switch fabric 28. In this case, the call processing functions 24, the facility processing functions 26, and the switch fabric 28 are hard-coded, intermixed, and undifferentiated, as symbolized by the group 46. The Service Switching Point ("SSP") is a functional module that resides at a switch in order to recognize when a subscriber's signaling requires more than simple routing based solely on the dialed number. The SSP suspends further call handling while it initiates a query for correct call handling to the remote SCP 42, which essentially acts as a database server for a number of switches. This division of processing results in the offloading from the switch of the infrequent, yet time-consuming, task of handling special service calls. Furthermore, this moderate centralization strikes a balance between having one readily modifiable, heavily loaded repository serving the whole network and deploying a complete copy of the repository at every switch. Referring now to Figure 2, there is shown a diagram of a telecommunications system employing an IN or AIN architecture, generally denoted as 50. Various customer systems, such as an ISDN terminal 52, a first telephone 54, and a second telephone 56, are connected to the SSP and Switching System 44. The ISDN terminal 52 is connected to the SSP and Switching System 44 by signaling line 60 and transport line 62. The first telephone 54 is connected to the SSP and Switching System 44 by transport line 64. The second telephone 56 is connected to a remote switching system 66 by transport line 68, and the remote switching system 66 is connected to the SSP and Switching System 44 by transport line 70. As described above with reference to Figure 1, the SSP 70 is a functional module that resides at a switch in order to recognize when a subscriber's signaling requires more than simple routing based upon the dialed number. The SSP 70 suspends further handling of the call while it initiates a query for correct call handling. This query is sent in the form of SS7 messaging to a remote SCP 42. The Service Control Point 42 is so named because changing the database content at this location can alter the network function as it appears to the subscribers connected through the many subtending switches. The query is sent via signaling line 72 to the Signal Transfer Point ("STP") 74, which is simply a router for SS7 messaging among these elements, and then through signaling line 76 to the SCP 42. The Integrated Service Management System ("ISMS") 78 is envisioned as a management tool for deploying or altering services, or for managing subscriber access to services. The ISMS 78 operates primarily by altering the operating logic and data stored within the SSP 70 and the SCP 42. The ISMS 78 has various user interfaces 80 and 82. The ISMS 78 is connected to the SCP 42 by operations line 84, to the SSP and Switching System 44 by operations line 86, and to the Intelligent Peripheral ("IP") 88 by operations line 90.
The Intelligent Peripheral 88 is a device used to add functions to the network that are not available in the switches, such as a voice response or speech recognition system. The IP 88 is connected to the SSP and Switching System 44 by signaling line 92 and transport line 94. Referring now to Figures 2 and 3, the processing of a call in accordance with the prior art will be described. The call is initiated when the customer picks up the receiver and begins dialing, in block 100. The SSP 70 at the company switch monitors the dialing and recognizes the trigger sequence in block 102. The SSP 70 suspends further handling of the call until the service logic can be consulted, in block 104. The SSP 70 then composes a standard SS7 message and sends it through the STP(s) 74 to the SCP 42 in block 104. The SCP 42 receives and decodes the message and invokes the SLP in block 106. The Service Logic Interpreter ("SLI") interprets the SLP, which may call for actuating other functions, such as a database lookup for number translation, in block 106. The SCP 42 returns an SS7 message to the SSP and Switching System 44 regarding the handling of the call, or otherwise dispatches messages to the network elements to carry out the correct service, in block 108. At the conclusion of the call, an SS7 message is sent among the switches to tear down the call, and call detail records are created by each switch involved in the call, in block 110. The call detail records are collected, correlated, and resolved off-line for each call to derive billing for toll calls, in block 112. Call processing terminates in block 114.
The IN and AIN architectures attempt to predefine a standard set of functions to support all foreseeable services. These standard functions are all hard-coded into various state machines in the switch. Unfortunately, new functions, which are likely to arise in conjunction with new technologies or unforeseen service needs, cannot be implemented without an extensive overhaul and testing of the network software across many vendor platforms. Furthermore, if a new function requires changes to standardized call models, protocols, or interfaces, implementation of the service using that function may be delayed until the changes are ratified by the standards body. But even as the standards have attempted to expand the set of functions supported by IN and AIN, equipment suppliers have refused to endorse these draft standards because of the stepwise increase in code complexity. Referring now to Figure 4, the process for generic service creation according to the prior art will be described. The network owner requests a new function that involves a new service, a new call state, and a new protocol, in block 120. If a new call model is required, as determined in decision block 122, a proposal must be submitted to the standards body, and the network owner must await industry adoption of the new standard, which can take from one to three years, in block 124. After the new standard is adopted, or if a new call model is not required, as determined in decision block 122, the network owner must request and wait for code updates from each manufacturer to implement the new function, which can take from six to eighteen months, in block 126. The network owner must test the new function and all prior functions for each manufacturer, which can take from one to three months, in block 128. If these tests are not successful, as determined in decision block 130, and the cause of the failure is a design problem, as determined in decision block 132, the process must be restarted at block 122. If, however, the cause of the failure is a code problem, as determined in decision block 132, the manufacturer must fix the code in block 134 and the testing must be repeated in block 128. If all the tests are successful, as determined in decision block 130, and the manufacturer is to create the service, as determined in decision block 136, the network owner must order the new service version from the manufacturer and wait for delivery of the tested version, in block 132. If, however, the network owner is to create the service, as determined in decision block 136, the network owner must create the new version of the service using a creation tool and iterate through unit testing to ensure that the new service works correctly, in block 140. In either case, the network owner then performs integration testing to ensure that all prior services still operate properly, in block 142. A system test must then be run to ensure proper coordination between the SCP and the switch, in block 144. The network owner must then coordinate the loading of the new software release to all of the switches and SSPs in the network, in block 146. The implementation of the new function ends at block 148. Referring again to Figure 2, other limitations of the IN and AIN architectures arise from having the call processing and facility processing functions, namely the SSP 70, operating within the switch.
As a result, these functions must be provided by each switch manufacturer using its proprietary software. Consequently, network owners still rely heavily on the manufacturers' software releases to support new functions. To further complicate matters, the network owner cannot test SSP 70 modules in conjunction with other modules in a unified development and testing environment. Moreover, there is no assurance that an SSP 70 intended for a switch manufacturer's processing environment will be compatible with the network owner's service creation environment. This reliance of multiple network owners on a common set of switch manufacturers results in two undesirable situations that limit competition. First, a manufacturer's software release may attempt to incorporate changes requested by several network owners, thus preventing network owners from truly differentiating their services from the services provided by their competitors. This also forces some network owners to wait until the manufacturer incorporates requests from other network owners into the new release. Second, a switch software release incorporating a function as requested by one network owner to implement a new service may unintentionally become accessible to other network owners. Therefore, despite the intentions of the IN and AIN architects, the network owner's creation, testing, and deployment of new services are still impeded because the network owner does not have complete control of, or access to, the functional elements that shape network service behavior. In another attempt to solve these problems, a Separate Switch Intelligence and Switch Fabric ("SSI/SF") architecture, which is generally denoted as 150 (Figure 1), logically separates the SSP 70 from the Switching System 44. Referring again to Figure 1, the switch intelligence 152 contains the call processing functions 24 and the facility processing functions 26, which are encoded in discrete state tables, with the corresponding hard-coded state machine engines, as symbolized by circles 154 and 156. The interface between the switch fabric functions 158 and the switch intelligence functions 152 may be extended through a communications network, such that the switch fabric 158 and the switch intelligence 152 need not necessarily be physically located together, executed within the same processor, or even have a one-to-one correspondence. In turn, the switch intelligence 152 provides a consistent interface of simple, non-service-specific, non-manufacturer-specific functions common to all switches. An Intelligent Computing Complex ("ICC") 160 contains the service processing functions 22 and communicates with multiple switch intelligence elements 152. This approach offers the network owner advantages in flexible service implementation, because everything except the most elementary functions is removed from the realm of manufacturer-specific code. Additional improvements can be realized by providing a more unified environment for the creation, development, testing, and execution of service logic. As discussed above, current network switches are based on proprietary, monolithic hardware and software.
Although network switches can cost millions of dollars, this equipment is relatively slow in terms of processing speed when viewed in light of currently available computing technology. For example, these switches are based on Reduced Instruction Set Computing ("RISC") processors running in the 60 MHz range, and they communicate with each other using a data communications protocol, such as X.25, that typically supports a transmission rate of 9.6 Kb/s between various platforms in a switching network. This is extremely slow when compared to personal computers that contain processors running at 200 MHz or more, and to high-end computer workstations that offer 150 Mb/s FDDI and ATM interfaces. Accordingly, network owners need the ability to use high-end workstations instead of proprietary hardware. The present invention may include an intelligent call processor, an intelligent switching node, and an intelligent communications network for use in a communications system. The intelligent call processor may include a logical platform having a plurality of functions, wherein at least one of the functions is a service processing function, at least one of the functions is call processing, and at least one of the functions is facility processing, and a processor for executing the plurality of functions. The intelligent switching node may include an intelligent call processor and a resource complex communicably linked to the intelligent call processor and logically separated from the intelligent call processor. The intelligent communications network may include a plurality of intelligent distributed network nodes, a network management system for monitoring and controlling a wide area network and the plurality of intelligent switching nodes, and the wide area network interconnecting the plurality of intelligent distributed network nodes and the network management system. The foregoing and other advantages of the present invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, in which: Figure 1 is a logical representation of various switching architectures, including that of the present invention. Figure 2 is a diagram of a telecommunications system employing a typical intelligent network configuration according to the prior art. Figure 3 is a flow chart for generic call processing according to the prior art. Figure 4 is a flow chart for generic service creation according to the prior art. Figure 5 is a diagram of a telecommunications system employing an intelligent distributed network architecture in accordance with the present invention. Figure 6 is a logical and functional diagram of a telecommunications system employing an intelligent distributed network architecture in accordance with the present invention. Figure 7 is a diagram illustrating the layering of functional interfaces within an intelligent call processor in accordance with the present invention. Figure 8 is a Venn diagram illustrating the nesting of processing contexts whereby a virtual machine supports a service logic execution environment in accordance with the present invention. Figure 9 is a diagram illustrating the class hierarchy of the managed objects within an intelligent call processor in accordance with the present invention. Figure 10 is a diagram illustrating the interaction of managed objects in an example call processing scenario in accordance with the present invention.
Figure 11 is a flow diagram for generic call processing in accordance with the present invention. Figure 12 is a flow chart for generic service creation using managed objects in accordance with the present invention. Figure 13 illustrates the use of similar tools during service creation to create compatible objects for the same target environment in accordance with the present invention. Figure 14 illustrates how the palette of each tool can change in response to new functional pieces in accordance with the present invention. Figure 15 illustrates the usage flow of the Managed Object Creation Environment. Figure 16 illustrates the Managed Object Creation Environment stack. Figure 17 illustrates how the unified execution environment also allows simplified creation and modification of even the tools with which developers build the objects for the SLEE. Referring now to Figure 1, an Intelligent Distributed Network Architecture ("IDNA") in accordance with the present invention is generally denoted as 170. The present invention unifies the ICC 160 and the Switch Intelligence 152 of the SSI/SF architecture 150 into an Intelligent Call Processor ("ICP") 172. Unlike the IN or AIN architecture 40 and the SSI/SF architecture 150, whose functions are defined in state tables, the ICP 172 contains the service processing functions 22, the call processing functions 24, and the facility processing functions 26 as managed objects on an object-oriented platform, which is symbolized by blocks 174, 176, and 178. The ICP 172 is logically separated from the Resource Complex 180. Referring now to Figure 5, a telecommunications system employing an intelligent distributed network architecture in accordance with the present invention will be described, and is generally denoted as 200. The Wide Area Network ("WAN") 202 is a system that supports the distribution of applications and data across a wide geographic area. The transport network is based on the Synchronous Optical Network ("SONET"), and it connects the IDNA Nodes 204 and enables the applications within those nodes to communicate with each other. Each IDNA Node 204 contains an Intelligent Call Processor ("ICP") 172 and a Resource Complex 180 (Figure 1). Figure 5 illustrates an IDNA Node 204 having a Resource Complex A ("RCA") 206 and a Resource Complex B ("RCB") 208. The ICP 172 may be linked to Adjunct Processors 210, which provide existing support functions, such as provisioning, billing, and restoration. Eventually, the functions provided by the Adjunct Processors 210 could be absorbed by functions within the Network Management System ("NMS") 212. The ICP 172 may also be linked to other ICPs 172, other networks (not shown), or other devices (not shown) through a direct link 214 having signaling links 216 and bearer links 218. A direct link avoids latency between the connected devices and allows the devices to communicate in their own language. The ICP 172 is the "brain" of the IDNA Node 204, and it is preferably a general-purpose computer, which may range from a single processor with a single memory storage device to a large-scale computer network, depending on the processing requirements of the IDNA Node 204. Preferably, the general-purpose computer will have redundant processing, memory storage, and connections.
As used herein, general-purpose computers refer to computers that are, or may be assembled from, commercial off-the-shelf components, as opposed to dedicated devices specifically configured and designed for telephone switching applications. The integration of general-purpose computers within the calling network provides numerous advantages. The use of general-purpose computers gives the ICP 172 the ability to scale up with additional hardware to meet increased processing needs. These additions include the ability to increase processing power, data storage, and communications bandwidth. These additions do not require modification of manufacturer-specific software and/or hardware in each switch of the calling network. As a result, new services and protocols may be implemented and installed on a global scale without modification of the individual devices in the switching network. By moving from monolithic switches 20 (Figure 1) to intelligent call processors 172, the present invention provides the foregoing advantages and greater capabilities. In the case of applications that require more processing power, multi-processing allows the use of less expensive processors to optimize the price/performance ratio for call processing. In other applications, it may be advantageous, necessary, or more cost-effective to use more powerful machines, such as minicomputers, with higher processing speeds. The ICP 172, as noted above, may comprise a cluster of general-purpose computers operating, for example, on a UNIX or Windows NT operating system. For example, in a large application supporting up to 100,000 ports on a single Resource Complex, the ICP 172 may consist of sixteen (16) 32-bit processors operating at 333 MHz in a Symmetric Multi-Processor cluster. The processors could, for example, be divided into four separate servers with four processors each. The individual processors would be connected with a System Area Network ("SAN") or other clustering technology. The processor cluster could share access to Redundant Array of Independent Disks ("RAID") modular data storage devices. Shared storage may be adjusted by adding or removing modular disk storage devices. The servers in the cluster would preferably share redundant links to the RC 180 (Figure 1). As illustrated, and similar to the "plug and play" feature of personal computers, the ICP software architecture is an open processing model that allows the interchangeability of: (1) management software; (2) ICP applications; (3) computing hardware and software; (4) resource complex components; and even (5) service architecture and processing. This generic architecture reduces maintenance costs due to standardization and provides the benefits derived from economies of scale. Accordingly, the present invention enables the partitioning of development work and the use of modular tools that result in faster development and implementation of services. Moreover, the use of, and the relevant aspects of, service management are within the control of the network operator on an as-required basis, as opposed to the limitations imposed by a fixed messaging protocol or a particular combination of hardware and software supplied by a given manufacturer. Through the use of managed objects, the present invention also allows services and functions to be flexibly ("where you want it") and dynamically ("on the fly") distributed across the network based on any number of factors, such as capacity and usage.
Performance is improved because service processing 22 (Figure 1), call processing 24 (Figure 1), and facility processing 26 (Figure 1) operate on a homogeneous platform. In addition, the present invention allows the monitoring and manipulation of call sub-elements that could not be accessed previously. The present invention also allows the network operator to monitor the usage of functions or services so that, when they become obsolete or go unused, they can be eliminated.
The Resource Complex ("RC") 180 (Figure 1) is a collection of physical devices, or resources, that provide support, signaling, and connection services. The RC 180, which may include the intelligent peripherals 88, replaces the switch structures 28 and 158 (Figure 1) of the IN or AIN or SSI / SF architecture. Unlike the IN or AIN architecture, the Resource Complex control, such as the RCA 206, is at a lower level. Moreover, the RCA 206 may contain more than one switch structure 158. The switch structures 158 or other client interfaces (not shown), are connected to multiple subscribers and switching networks by means of conventional telephony connections. These customer systems may include ISDN terminals 52, fax machines 220, telephones 54, and PBX systems 222. The ICP 172 controls and communicates with the RC 180 (Figure 1), the RCA 206, and the RCB 208, through of a high-speed data communications conduit (minimally Ethernet connection of 100 Mb / second) 224. The RC 180, 206, and 208 can be analogous with a printer, and the ICP 172 can be analogous with a personal computer, in where the personal computer uses a driver to control the printer. The "controller" in the IDNA Node 204 is a Resource Complex Proxy ("RCP") (not shown), which will be described later with reference to Figure 6. This allows manufacturers to provide a node that complies with IDNA using this interface, without having to rewrite all its software to incorporate the IDNA models. In addition, the control of Resource Complex 180 (Figure 1), RCA 206, and RCB 208, is at a lower level than typically provided by the AIN or IN architecture. As a result, resource complex manufacturers only have to provide an interface to support the management of the installation and the network; they do not have to provide the network owner with a specific processing of calls and services. An interface at a low level is abstracted in more discrete operations. Having a single interface allows the network owner to select from a wide spectrum of Resource Complex manufacturers, basing their decisions on price and operation. Intelligence is added to ICP 172 instead of RC 180, which isolates RC 180 from changes, and reduces its complexity. Because the role of the RC 180 is simplified, changes are made more easily, thus making it easier to migrate towards alternative switching and transmission technologies, such as the Asynchronous Transfer Mode ("ATM"). Intelligent Peripherals ("IP") 88 provide the ability to process and act on the information contained within the transmission path of the real call. The IPs 88 are generally a separate Resource Complex, such as the RCB 208, and are controlled by the ICPs in a manner similar to the RCA 206. The IPs 88 may provide the capability to process data in the call transmission path. real time, in real time, using the Digital Signal Processing ("DSP") technology. The Network Management System ("NMS") 212 is used to monitor and control the hardware and services in the IDNA 200 Network. A suggested implementation of NMS 212 could be a structure that complies with the Telecommunications Administration Network ( "TMN") that provides management of the components within the IDNA 200 network. Specifically, the NMS 212 controls the deployment of services, maintains the health of these services, provides information about these services, and provides a function of administration at the network level for the IDNA 200 Network. 
The NMS 212 accesses and controls the services and hardware through agent functionality within the IDNA Nodes 204. The ICP-NMS Agent (not shown) within the IDNA Node 204 carries out the commands or requests issued by the NMS 212. The NMS 212 can directly monitor and control the RCA 206 and the RCB 208 through a standard operations link 226. The Managed Object Creation Environment ("MOCE") 228 contains the subcomponents for creating services that run in the IDNA Network 200. A Graphical User Interface ("GUI"), the primary subcomponent of the MOCE, embodies the Service Independent Building Block ("SIBB") and API representations that a service designer uses to create new services. The MOCE 228 is a unified collection of tools hosted on a single user environment or platform. It represents the collection of operations that are required throughout the service creation process, such as service documentation, managed object definition, interface definition, protocol definition, and data input definition, which are encapsulated in managed objects, as well as service testing. The network owner only has to develop a service once using the MOCE 228, because the managed objects can be applied to all of the nodes in its network. This is in contrast to the network owner having each of the different switch manufacturers develop its own version of the service, which means the service must be developed multiple times. The MOCE 228 and the NMS 212 are connected to each other via a Repository 230. The Repository 230 contains the managed objects that are distributed by the NMS 212 and used in the IDNA Nodes 204. The Repository 230 also provides a buffer between the MOCE 228 and the NMS 212. The MOCE 228 may, however, be directly connected to the NMS 212 to perform "live" network testing, which is indicated by the dotted line 232. Referring now to Figure 6, a logical and functional diagram of a telecommunications system employing an intelligent distributed network architecture 200 in accordance with the present invention will be described. The ICP 172 is shown to contain an ICP-NMS Agent 240 and a Service Logic Execution Environment ("SLEE") 242, which in turn hosts a variety of managed objects 246, 248, 250, and 252 derived from the managed objects base class 244. In general, managed objects are a method of packaging software functions, wherein each managed object offers both functional and administrative interfaces to implement the functions of the managed object. The administrative interface controls access to who and what can access the managed object functions. In the present invention, all of the telephony application software executed by the IDNA Node 204, except for the infrastructure software, is deployed as managed objects and supporting libraries. This provides a uniform interface and implementation to control and manage the IDNA Node software. The collection of network elements that connect, route, and terminate bearer traffic handled by the node will be collectively referred to as the Resource Complex ("RC") 180. The service processing applications running in the SLEE use the Resource Complex Proxy ("RCP") 244 as a control interface to the RC 180. The RCP 244 may be likened to a device driver, in that it adapts equipment-independent commands from objects in the SLEE into equipment-specific commands to be performed by the RC 180.
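The device-driver analogy can be made concrete with a minimal Java sketch (Java being the platform prescribed later in this description). The interface and class names below, and the textual command strings, are illustrative assumptions only; they are not defined by the architecture itself.

// Illustrative sketch only: a Resource Complex Proxy adapting equipment-independent
// commands from SLEE objects into equipment-specific commands for one vendor's RC.

/** Equipment-independent commands issued by managed objects in the SLEE. */
interface ResourceComplexProxy {
    void connect(String fromPort, String toPort);   // request a bearer connection
    void release(String port);                      // release a resource
}

/** Vendor-specific control interface exposed by one Resource Complex. */
interface VendorRcControl {
    void sendVendorMessage(String rawCommand);      // equipment-specific command channel
}

/** Adapter: plays the role of a device driver between the SLEE and the RC 180. */
class ExampleVendorRcProxy implements ResourceComplexProxy {
    private final VendorRcControl rc;

    ExampleVendorRcProxy(VendorRcControl rc) {
        this.rc = rc;
    }

    @Override
    public void connect(String fromPort, String toPort) {
        // Translate the generic request into this vendor's command syntax.
        rc.sendVendorMessage("CONNECT " + fromPort + " " + toPort);
    }

    @Override
    public void release(String port) {
        rc.sendVendorMessage("RELEASE " + port);
    }

    /** Minimal demonstration with a stub vendor interface that just logs commands. */
    public static void main(String[] args) {
        VendorRcControl stub = raw -> System.out.println("RC receives: " + raw);
        ResourceComplexProxy proxy = new ExampleVendorRcProxy(stub);
        proxy.connect("DS0-14", "DS0-88");
        proxy.release("DS0-14");
    }
}

The point of the sketch is that objects in the SLEE program against the equipment-independent interface, while each Resource Complex vendor supplies only a thin adapter of this kind.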
The RCP 244 may be described as an interface that implements the basic commands common among the vendors of the resources in the Resource Complex 180. The RCP 244 could be implemented, as shown, as one or more managed objects running in the IDNA Node 204. Alternatively, this function could be provided as part of the RC 180. The NMS 212, the Repository 230, and the MOCE 228 are consistent with the description of these elements in the discussion of Figure 5. Note that the operations link 226 directly connects the NMS 212 to the RC 180. This corresponds to the more traditional role of a network management system in monitoring the operational status of the network hardware.
This can be done independently of the IDNA architecture (for example, using the well-known TMN approach). In addition, the RC 180 may be connected to other resource complexes 254. A direct signaling link 214 is also shown entering the ICP 172, so that signaling 216, such as SS7, can enter the call processing environment directly. By intercepting signaling at the periphery of the network, the SS7 message can go directly to the ICP 172 without passing through the RC 180. This reduces latency and improves robustness by shortening the signaling path. An accompanying bearer link 218 connects to the RC 180. Figure 7 illustrates the layering of functional interfaces within the ICP 172. The MOCE 228 is the system where the managed object software and its dependencies are generated. The NMS 212 controls the execution of the ICP 172 by interfacing with an agent function provided within the ICP 172, called the ICP-NMS Agent 240. The NMS 212 controls the operation of the Local Operating System ("LOS") 260 on the ICP 172. The NMS 212 controls the operation of the ICP 172, including the starting and stopping of processes, the querying of the contents of the process table and the status of processes, the configuration of operating system parameters, and the monitoring of the operation of the general-purpose computer system that hosts the ICP 172. The NMS 212 also controls the operation of the Wide Area Network Operating System ("WANOS") 262. The NMS 212 controls the initialization and operation of the WANOS support processes and the configuration of the WANOS libraries, by means of its control of the LOS 260 and of any other interfaces provided by the NMS SLEE control. The NMS 212 controls the instantiation and operation of the one or more SLEEs 242 running on an ICP 172. The LOS 260 is a commercial off-the-shelf operating system for the operation of the general-purpose computer. The WANOS 262 is a commercial off-the-shelf middleware software package (for example, an object request broker) that facilitates seamless communication between the computing nodes. The SLEE 242 hosts the execution of managed objects 244, which are software instances that implement the service processing architecture. The SLEE 242 implements the means to control the execution of the managed objects 244 through the ICP-NMS Agent 240. Accordingly, a SLEE 242 instance is a software process capable of deploying and removing managed object software, instantiating and destroying managed object instances, supporting the interaction and collaboration of managed objects, administering access to the Native Libraries 264, and interfacing with the ICP-NMS Agent 240 in implementing the required controls. The Native Libraries 264 are libraries coded to depend only on the LOS 260 or the WANOS 262 and the native execution environment of the general-purpose computer (for example, compiled C libraries). They are used primarily to supplement the native functionality provided by the SLEE 242. The SLEE libraries 266 are libraries coded to execute in the SLEE 242. They may access the functions provided by the SLEE 242 and by the Native Libraries 264. The managed objects 244 are the software loaded and executed by the SLEE 242. They may access the functionality provided by the SLEE 242 and by the SLEE libraries 266 (and possibly the Native Libraries 264). The ICP-NMS Agent 240 provides the NMS 212 with the ability to control the operation of the ICP 172.
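A minimal sketch of the kind of control interface the ICP-NMS Agent 240 might expose to the NMS 212 is given below in Java; the method names and signatures are assumptions chosen for readability, not an interface defined by this description.

// Illustrative sketch only: one possible shape of the ICP-NMS Agent control surface.
import java.util.List;

/** Operations the NMS might invoke on a node through its ICP-NMS Agent. */
interface IcpNmsAgent {
    void startSlee(String sleeId);                      // instantiate a SLEE on this ICP
    void stopSlee(String sleeId);                       // shut a SLEE down
    void deployManagedObjectClass(String sleeId,
                                  byte[] classImage);   // install class code into a SLEE
    List<String> listProcesses();                       // interrogate the process table
    String queryHostStatus();                           // report general-purpose host status
}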
The ICP-NMS Agent 240 implements the capability to control the operation and configuration of the LOS 260, the operation and configuration of the WANOS 262, and the instantiation and operation of the SLEE(s) 242. The proposed service processing architecture operates in layers of increasing abstraction. From the perspective of the SLEE 242, however, there are only two layers: the managed object layer 244, which is the layer of objects (software instances) that interact under the control of the NMS 212; and the library layer 264 or 266, which is the software layer (either native to the SLEE 242 or the LOS 260) that provides supplementary functions for the operation of the managed objects 244 or the SLEE 242 itself. It is anticipated, however, that at some point the NMS 212 may relinquish control of the exact location of managed object instances. For example, managed object instances may be allowed to migrate from one node to another based on one or more algorithms or events, such as in response to demand. Figure 8 shows the nesting of processing contexts within an ICP 172, such that the SLEE 242 is implemented within a virtual machine 270. A virtual machine 270 is started as a process within the LOS 260 on an ICP 172. Then, the SLEE management code is loaded and executed as the main program 272 by the VM process 270. The SLEE management code executing as the main program 272 interfaces with the functionality of the ICP-NMS Agent 240 and oversees the creation and destruction of managed object instances 274 from the class table 276. For example, the managed object X, which resides in the class table 276, may have multiple instances to be deployed, and each managed object X is then instantiated as needed as X1, X2, and X3, either under the control of the NMS or in the course of processing services requested by subscribers. The use of a virtual machine 270 has implications for service creation as well as for service logic execution. The IN and AIN architectures revolve around services that are encoded as state tables. These state table descriptions are interpreted by a hard-coded state machine engine that carries out the encoded service function. As a result, the MOCE 228 and the Service Logic Interpreter ("SLI") are very interdependent and provide only a fixed palette of functions. If a desired new service requires the addition of a new building block function, both the MOCE 228 and the SLI must be changed, recompiled, thoroughly tested, and deployed in a coordinated manner. In an IN or AIN architecture, deployment of new SLI code requires a brief network outage. In contrast, the present invention provides a multiple concurrent architecture that allows old and new SLIs to coexist. The present invention uses a virtual machine 270 to overcome these disadvantages. A virtual machine 270 is the functional equivalent of a computer, programmable at such an elementary level of function (i.e., logic operators, variables, conditional branches, etc.) that a resident program can express essentially any conceivable logic function, even those that are not readily expressed as a finite-state model. The universality of a virtual machine 270 is especially useful in this application for allowing the expression of call processing logic in forms that may be preferred over a state table.
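The relationship among the virtual machine 270, the SLEE management code running as main program 272, the class table 276, and the managed object instances 274 can be sketched as follows in Java. The class and method names are illustrative assumptions; only the described behavior (installing classes and instantiating managed objects such as X1, X2, and X3 on demand) is taken from the description above.

// Illustrative sketch only: SLEE administration code instantiating managed objects
// on demand from a class table.
import java.util.HashMap;
import java.util.Map;

/** Placeholder base type from which all managed objects would derive. */
abstract class ManagedObject {
    abstract void start();
}

/** Example managed object "X" installed in the class table. */
class ManagedObjectX extends ManagedObject {
    @Override
    void start() {
        System.out.println("Managed object X instance started");
    }
}

/** Simplified SLEE administration code (the "main program 272" of Figure 8). */
public class SleeMain {
    // Class table 276: installed managed-object classes, keyed by name.
    private final Map<String, Class<? extends ManagedObject>> classTable = new HashMap<>();

    void installClass(String name, Class<? extends ManagedObject> clazz) {
        classTable.put(name, clazz);
    }

    /** Instantiate a managed object (e.g., X1, X2, X3) as service demand requires. */
    ManagedObject instantiate(String name) throws ReflectiveOperationException {
        ManagedObject instance = classTable.get(name).getDeclaredConstructor().newInstance();
        instance.start();
        return instance;
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        SleeMain slee = new SleeMain();
        slee.installClass("X", ManagedObjectX.class);
        // Three instances, under NMS control or in the course of subscriber service requests.
        slee.instantiate("X");
        slee.instantiate("X");
        slee.instantiate("X");
    }
}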
A virtual machine differs from a logic interpreter, which normally supports higher-level functions and is limited in program semantics and in flexibility of expression. In the IN and AIN architectures, the SLI supports a limited structure and a limited set of functions. When the virtual machine software 270 runs on a general-purpose computer, the virtual machine 270 may be viewed as an adapter layer. The code that runs as a program within the virtual machine 270 may have the same granularity of control and access to input/output and storage as if it were running directly on the processor, and yet the same program can be ported to entirely different processor hardware running an equivalent virtual machine environment (i.e., operation over heterogeneous environments). In a preferred embodiment, the "Java" platform developed by Sun Microsystems is prescribed for expressing all telephony application software. The prevalence of Java provides practical advantages in platform portability, in the ubiquity of development tools and skill sets, and in the existing support protocols, such as FTP and HTTP. Java accommodates object-oriented programming in a manner similar to C++. The SLEE management code 272 and all managed objects 276 in the SLEE 242 are encoded as Java byte codes. The SLEE management code 272 includes functions to install, remove, and instantiate classes, to query and delete instances, and to assert global values and run/stop status. Despite the foregoing advantages, the use of a virtual machine as a SLEE 242, in particular a Java virtual machine, appears to have been overlooked by the IN and AIN architects. Perhaps constrained by the more common telephony applications, such as interactive voice response, the IN and AIN designers have assumed that a fixed palette of functions is adequate and preferable because of its apparent simplicity and its similarity to traditional call processing models. Although the AIN approach improves the speed of service creation only within a fixed call model and function set, the present invention can easily evolve the entire implicit service framework to meet new service demands and new call processing paradigms. The choice of an object-oriented SLEE 242 provides many key advantages, including dependency management and shared security among co-resident objects. The advantages of object-oriented programming, such as modularity, polymorphism, and reuse, are realized in the SLEE 242 in accordance with the present invention. Because of the inheritance hierarchy of the managed objects, sweeping changes can be made to the call model, protocol, or some other aspect of call processing by relatively localized code changes, for example, to a single base class. Another important advantage is that the coded classes from which objects are instantiated within each SLEE 242 can be updated without having to disable or restart the SLEE 242. In a preferred embodiment, a set of operational rules can be encoded to allow or restrict the deployment of new class implementation code to the SLEE 242, or the instantiation of objects therefrom, based on physical location or operating conditions. These rules can be encoded in different locations, such as part of the managed object image that the NMS 212 uses for deployment, or in the actual object code that is activated by the SLEE 242. In either case, the NMS 212 would have error handling procedures for the cases in which instantiations fail.
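One possible encoding of such operational rules is sketched below in Java, assuming a simple region-based representation. The rule and region names are hypothetical, and the conflict resolution shown follows the "instantiate only where allowed" policy discussed next.

// Illustrative sketch only: region-based rules restricting where a managed-object
// class may be instantiated.
import java.util.List;

/** A deployment rule: may the named class be instantiated at a node in this region? */
record InstantiationRule(String className, String region, boolean allowed) {}

class RuleEvaluator {
    private final List<InstantiationRule> rules;

    RuleEvaluator(List<InstantiationRule> rules) {
        this.rules = rules;
    }

    /**
     * Conflict resolution: when a node lies in several regions and the rules disagree,
     * the object is instantiated only where it is allowed, so any prohibition on a
     * covering region blocks instantiation.
     */
    boolean mayInstantiate(String className, List<String> nodeRegions) {
        boolean allowedSomewhere = false;
        for (InstantiationRule rule : rules) {
            if (!rule.className().equals(className) || !nodeRegions.contains(rule.region())) {
                continue;       // rule does not apply to this class or this node
            }
            if (!rule.allowed()) {
                return false;   // a prohibition in any covering region prevails
            }
            allowedSomewhere = true;
        }
        return allowedSomewhere;
    }

    public static void main(String[] args) {
        RuleEvaluator evaluator = new RuleEvaluator(List.of(
                new InstantiationRule("TrunkManagement", "RegionA", false),
                new InstantiationRule("TrunkManagement", "RegionB", true)));
        // Node X lies in both Region A and Region B: the prohibition prevails (prints false).
        System.out.println(evaluator.mayInstantiate("TrunkManagement",
                List.of("RegionA", "RegionB")));
    }
}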
The location constraints could be any means of characterizing the physical location of the node (e.g., nation, state, city, street address, or global coordinates). In addition, a method may be adopted for resolving conflicts among the operational rules within the set. For example, if a particular object is to be instantiated at node X, which lies within both Region A and Region B, and the set of operational rules provides that instantiation of the particular object is prohibited in Region A but allowed in Region B, a conflict arises as to whether or not the particular object may be instantiated at node X. If, however, a conflict resolution rule simply states that objects may be instantiated only where allowed, the conflict is resolved and the particular object is not instantiated at node X. This set of operational rules could be used to restrict the deployment or instantiation of trunk management class code to situations in which the intelligent call processor is actually managing trunk resources. Such rules could also be used to restrict billing processor instances, which are tailored to the billing regulations of a specific state, to the boundaries of that state. As mentioned above, these location restriction rules may be internal or external to the class object. Referring now to Figure 9, the class hierarchy of the managed objects will be described in accordance with a preferred embodiment of the present invention. The abstract managed object base class 244 includes common functionality and virtual functions to ensure that all derived classes can be properly supported as objects in the SLEE 242. Specifically, four distinct subclasses are shown: the service control class 252, the call control class 250, the bearer control class 248, and the resource proxy class 246. The service control class 252 is the base class for all service function objects. The session manager class 280 encapsulates the information related to a session and its activities. A session may comprise one or more calls or other invocations of network functions. The session manager class 280 provides a unique identifier for each session. If call processing is taking place in a nodal fashion, then billing information must be collated. A unique identifier for each call facilitates collation, rather than requiring costly correlation processing. In service processing, protocols are wrapped by successive layers of abstraction. Eventually, the protocol is abstracted sufficiently to warrant the allocation/instantiation of a session manager (for example, in SS7, receipt of an IAM message would warrant having a session manager). The bearer capability class 282 changes the quality of service on a bearer. A service control class 252 can enable changes in the Quality of Service ("QoS") of a call, or even change the bearer capability, such as moving from 56 Kbit/s to higher rates and then back. The QoS is managed by the connection manager class 302. For example, a Half-Rate subclass 284 degrades the QoS of a call to a 4 KHz sampling rate instead of the usual 8 KHz sampling rate. A Stereo subclass 286 could allow a user to form two connections in a call to support a left channel and a right channel. The service arbitration class 288 codifies the mediation of service conflicts and service interactions. This is required because the service control classes 252 may conflict, particularly in origination and termination services.
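As an illustrative aid only, the subclass relationships just described can be sketched as a Java skeleton; the numerals in the class names merely echo the reference numerals of Figure 9, and every method body shown is an assumption added for readability rather than part of the described architecture.

// Illustrative sketch only: a skeleton of the Figure 9 managed-object class hierarchy.

/** Abstract base class 244: common functionality plus the administrative interface. */
abstract class ManagedObject244 {
    /** Administrative interface: controls who and what may reach the functional interface. */
    boolean accessPermitted(String principal) { return true; }   // placeholder policy
}

/** The four subclasses named in the description. */
abstract class ServiceControl252 extends ManagedObject244 {}
abstract class CallControl250    extends ManagedObject244 {}
abstract class BearerControl248  extends ManagedObject244 {}
abstract class ResourceProxy246  extends ManagedObject244 {}

/** Session manager 280: encapsulates a session and assigns it a unique identifier. */
class SessionManager280 extends ServiceControl252 {
    private final String sessionId = java.util.UUID.randomUUID().toString();
    String sessionId() { return sessionId; }
}

/** Bearer capability 282 and its specializations 284/286 alter the quality of service. */
abstract class BearerCapability282 extends ServiceControl252 {}
class HalfRate284 extends BearerCapability282 {}   // degrade sampling from 8 kHz to 4 kHz
class Stereo286   extends BearerCapability282 {}   // two connections: left and right channel

/** Service arbitration 288: conflicting service controls delegate their resolution here. */
class ServiceArbitration288 extends ServiceControl252 {
    /** Decide which of two conflicting pending requests proceeds (policy is illustrative). */
    ServiceControl252 arbitrate(ServiceControl252 first, ServiceControl252 second) {
        return first;   // e.g., favor the earlier request; a real policy would use context
    }
}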
For many practical reasons, it is undesirable to code into each service control class 252 an awareness of how to resolve conflicts with every other type of service control class 252. Instead, when a conflict is identified, references to the conflicting services and their pending requests are passed to the service arbitration class 288. The service arbitration class 288 can then decide the appropriate course of action, perhaps taking into account the local context, the configuration data, and subsequent queries to the conflicting service objects. Having a service arbitration class 288 allows the conflict resolution algorithms to be explicitly documented and coded, as opposed to hard-coded or implicit mechanisms. Moreover, when a service is updated or added, the existing services do not have to be updated to account for any conflict changes, which might otherwise require changing multiple relationships within a single service. The feature class 290 implements the standard set of capabilities associated with telephony (e.g., three-way calling, call waiting). One of these capabilities may be an override 292 that enables an origination to disconnect an existing call in order to reach an intended recipient. Another common capability may include a call block 294, whereby an origination offer may be rejected based on a set of criteria concerning the origination. The service discriminator class 296 is used to selectively invoke other services during call processing, and is itself subclassed as a service. The service discriminator class 296 provides flexible, context-sensitive service activation and obviates the need for fixed code within each service object to determine when the service should be activated. The activation sequence is isolated from the service itself. For example, Subscriber A and Subscriber B have access to the same set of features. Subscriber A chooses to selectively invoke one or more of his services using a particular set of signals. Subscriber B prefers to use a different set of signals to activate his services. The only difference between the subscribers is the manner in which they activate their services. It is therefore desirable to partition the service selection process from the service itself. Two solutions are available: the service selection processes for Subscribers A and B may be encoded in separate service discriminator classes 296, or one service discriminator class may use a per-subscriber profile to indicate the appropriate information. This can be generalized to apply to many users whose service sets are disjoint. Moreover, the use of a service discriminator class 296 can alter the mapping of access to services based on the context or progress of a given call. The implementation of this class allows various participants in the call to activate different services using perhaps different activation inputs. In the prior art, all switch vendors provided inflexible service selection schemes, which precluded this capability. The media-independent service class 298 is a type of service control class 252, such as store-and-forward 300, broadcasting, redirection, preemption, QoS, and multi-party connections, that applies to different media types, including voice, fax, e-mail, and others. If a service control class 252 can be developed that applies to each media type, then the service control class 252 can be decomposed into reusable service control classes 252.
If the service control class 252 is decomposed into media-dependent functions and a media-independent function (that is, a media-independent SC that implements the service, and one media-dependent SC per media type), the resulting classes are derived from the media-independent class 298. Store and forward 300 provides the generic capability to store a message or data stream of some media type and then deliver it later, based on some event. Redirection provides the ability to move a connection from one logical address to another based on specified conditions. This concept is the basis for call forwarding (of all types), ACD/UCD, WATS (1-800 services), find-me/follow-me, mobile roaming, and so forth. Preference, whether negotiated or otherwise, includes services such as call waiting, priority preemption, and so on. QoS-modulated connections implement future services over packetized networks, such as voice/fax, streaming video, and file transfer. Multi-party connections include three-way and N-way videoconferencing, and so on. Although user control and input is currently implemented primarily using the keys of a telephone, it is expected that voice recognition will be used for user control and input in the future. The connection manager class 302 is responsible for coordinating and arbitrating the connections of the various support controls involved in a call. The complexity of managing connectivity between parties in multiple calls is thereby encapsulated and removed from all other services. Service and call processing are decoupled from connections. This breaks the paradigm of mapping calls to connections as one-to-many; the mapping of calls to connections is now many-to-many. Connection manager classes 302 within an architecture are designed to operate independently or to collaborate as partners. In operation, the service control classes 252 present the connection manager classes 302 with requests to add, modify, and remove call segments. It is the responsibility of the connection manager class 302 to make these changes. Note: because connections can be considered as resources in and of themselves, or as attributes of resources, a connection manager class 302 may be implemented as a proxy or as an aspect of the basic media management functions. The call control class 250 implements the essential call processing, such as the basic finite state machine commonly used for telephony, and specifies the manner in which the processing of a call will take place. Two classes can be derived along the functional division of origination (placing a call) 304 and termination (accepting a call) 306. The support control class 248 is directed to adapting the specific signals and events to and from the Resource Complex 180, via the resource proxy 246, into generic signals and events that can be understood by the call control objects 250. An anticipated role of an object derived from this class is to collect information about the origination end of a call, such as the subscriber's line number, the class of service, the type of access, and so forth. The subclasses can be differentiated on the basis of the number of circuits or channels associated with the signaling. These may include a channel-associated class 308, as applied to the single signaling channel for 23 support channels in an ISDN Primary Rate Interface 310; a single-channel class 312, as typified by an analog telephone 314 that uses dialing to control a single circuit; and a common-channel class 316, represented by SS7 signaling 318 that is entirely dissociated from the support channels.
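One possible way to express the class relationships of Figure 9 in Java is sketched below. The lifecycle methods and the exact class names are assumptions of the sketch; only the roles (an abstract managed object base class, the four principal subclasses, and their derived classes) are taken from the description.

```java
// Illustrative skeleton of the Figure 9 managed object class hierarchy.
// Method names and the use of abstract classes are assumptions of this sketch.
abstract class ManagedObject {            // abstract base class 244
    abstract void start();                // lifecycle hooks assumed to be required by the SLEE
    abstract void stop();
}

// The four principal subclasses.
abstract class ServiceControl extends ManagedObject { }   // 252
abstract class CallControl    extends ManagedObject { }   // 250
abstract class SupportControl extends ManagedObject { }   // 248
abstract class ResourceProxy  extends ManagedObject { }   // 246

// Service control subclasses.
abstract class SessionManager          extends ServiceControl { }   // 280
abstract class SupportCapability       extends ServiceControl { }   // 282
abstract class ServiceArbitration      extends ServiceControl { }   // 288
abstract class Feature                 extends ServiceControl { }   // 290
abstract class ServiceDiscriminator    extends ServiceControl { }   // 296
abstract class MediaIndependentService extends ServiceControl { }   // 298
abstract class ConnectionManager       extends ServiceControl { }   // 302

// Call control subclasses.
abstract class Origination extends CallControl { }   // 304
abstract class Termination extends CallControl { }   // 306

// Support control subclasses.
abstract class ChannelAssociated extends SupportControl { }   // 308, e.g. ISDN PRI
abstract class SingleChannel     extends SupportControl { }   // 312, e.g. analog telephone
abstract class CommonChannel     extends SupportControl { }   // 316, e.g. SS7

// Telephone, VRU, trunk, and modem proxies would similarly extend
// ResourceProxy, as described in connection with class 246.
```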
The resource proxy class 246 is dedicated to interfacing the execution environment with the real-world switches and other elements in the support network. Examples of internal states implemented at this level, and inherited by all descendant classes, are in-service versus out-of-service and free versus in-use. Derived classes contemplated include the telephone 320 (a proxy for a standard 2500 set), voice response units ("VRUs") 322 (a proxy for voice response units), IMT trunk connections 324 (a proxy for digital trunk (T1/E1) circuits), and modem connections 326 (a proxy for digital modems), corresponding to the specific resource types in the Resource Complex 180. Referring now to Figure 10, the dynamic logical relationships of some instantiated objects will be described. A real-world telephone A 330 is coupled to a chain of objects in the SLEE 242 through a Resource Complex Proxy (not shown). The objects RC_Phone A 332, BC_Phone A 334, and CC_Orig A 336 remain instantiated in the SLEE 242 at all times. State changes and messaging occur between these objects whenever the real-world telephone goes on-hook or off-hook, or when a key is pressed. In the same way, telephone B 338 is represented in the SLEE 242 by a chain of objects RC_Phone B 340, BC_Phone B 342, and CC_Term B 344. An instance of Call_Block B 346 is associated with CC_Term B 344, indicating that Subscriber B has previously put a call blocking function into effect for telephone B 338. When Subscriber A goes off-hook, RC_Phone A 332 receives the notification and sends it to BC_Phone A 334, which propagates the notification to Session_Manager A 348 to start a session. Session_Manager A 348 algorithmically determines the default service control class associated with the start of the session (that is, it looks up the configuration specified as the default for RC_Phone A 332). Session_Manager A 348 finds that Service_Discriminator A 350 is the default service control class, and invokes it. Service_Discriminator A 350 directs BC_Phone A 334 to collect enough information to determine which service is ultimately activated (for example, it prompts Subscriber A to dial a service code and/or destination digits). In this example, Service_Discriminator A 350 determines whether Subscriber A intends to activate a Store_and_Forward service 352 (for example, a voice mail feature), a Half_Rate call 354 (a service that adjusts the support capability, reducing bandwidth by half), or Override 356 (a service that forces a terminator to accept an origination). Subscriber A dials the digits indicating activation of Override toward telephone B 338. Service_Discriminator A 350 activates the Override feature 356. The Override 356 service control collects enough information to determine where Subscriber A wants to call. The Override 356 service control then invokes the call origination control (CC_Orig A 336) to offer the call by way of Connection_Manager A 358. Connection_Manager A 358 contacts the call termination control, CC_Term B 344, which in turn contacts the Call_Block B 346 service that has been activated on it. The Call_Block 346 service notifies Connection_Manager A 358, through CC_Term B 344, that the call has been rejected.
CC_Orig A 336, however, has instructed Connection_Manager A 358 not to accept a rejection, because of the Override 356 service control. The Override 356 and Call_Block 346 services are now in conflict. Connection_Manager A 358 invokes the Service_Arbitration service 360, citing the conflict. The Service_Arbitration service 360 algorithmically determines a winner based on the information presented (for example, that the call termination control must accept the call). CC_Term B 344 accepts the origination attempt and propagates the appropriate signaling to BC_Phone B 342 and RC_Phone B 340. Telephone B 338 starts ringing, and Subscriber B answers. The resulting answer event is passed through CC_Term B 344 all the way to CC_Orig A 336. At this point, Connection_Manager A 358 establishes the speech path, and Subscribers A and B are talking. The call is now in a stable state. Session_Manager A 348 records the successful completion of call setup. Both call controls 336 and 344 are now waiting for a termination signal that will end the call. Subscriber B hangs up. The message is propagated to both call controls 336 and 344, which then terminate their participation in the call. Connection_Manager A 358 tears down the connection, and Session_Manager A 348 records the termination of the call. Subscriber A hangs up, and Session_Manager A 348 passes the call record to the billing system. As those skilled in the art will recognize, the flexibility of instantiating objects on demand can be traded off against the performance gains of instantiating and managing instances before they are needed.
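A minimal Java sketch of the arbitration step in the scenario above is given below. It assumes that each conflicting service control submits a pending request carrying a directive and a precedence value drawn from configuration data, and that the arbiter simply selects the directive of the highest-precedence request; the names PendingRequest and ServiceArbiter and the precedence values are assumptions of the sketch, not part of the specification.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of service arbitration: Override wants the call accepted,
// Call_Block wants it rejected, and the arbiter resolves the conflict from an
// explicit precedence table rather than hard-coding the resolution inside
// either service.
enum Directive { ACCEPT_CALL, REJECT_CALL }

final class PendingRequest {
    final String serviceName;   // e.g. "Override", "Call_Block"
    final Directive directive;
    final int precedence;       // higher wins; assumed to come from configuration data

    PendingRequest(String serviceName, Directive directive, int precedence) {
        this.serviceName = serviceName;
        this.directive = directive;
        this.precedence = precedence;
    }
}

final class ServiceArbiter {
    /** Returns the directive of the highest-precedence request. */
    Directive arbitrate(List<PendingRequest> conflicting) {
        return conflicting.stream()
                .max(Comparator.comparingInt(r -> r.precedence))
                .orElseThrow(() -> new IllegalArgumentException("no requests"))
                .directive;
    }
}

class ArbitrationDemo {
    public static void main(String[] args) {
        ServiceArbiter arbiter = new ServiceArbiter();
        Directive result = arbiter.arbitrate(List.of(
                new PendingRequest("Override", Directive.ACCEPT_CALL, 10),
                new PendingRequest("Call_Block", Directive.REJECT_CALL, 5)));
        System.out.println(result); // ACCEPT_CALL: the termination must accept the call
    }
}
```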
Figure 11 is a flowchart of the process steps for generic call processing in accordance with the present invention, in which the interactions take place in a high-speed environment and call processing intelligence can be applied from the very beginning of a given call. The customer picks up the receiver and begins dialing, in block 370. The line condition and each set of dialed digits appear as incremental events inside the ICP/SLEE through the RCP or, alternatively, as signaling sent directly from the central office to the ICP over a direct SS7 link, in block 372. The resource control, support control, and call control instances associated with the line respond to each event, and service objects are instantiated as required, in block 374. Service objects may apply additional interpretation to subsequent events, and may instantiate other service objects. The interactions between the resource control, support control, call control, and service control objects, plus any database resources, take place within a high-speed environment. Commands to the resource control to implement the service are dispatched through the RCP, and a comprehensive record of the call activity is stored, or processed immediately for billing purposes, in block 376. The processing of a single call or session is completed in block 378. Figure 12 illustrates the process steps for generic service creation using managed objects in accordance with the present invention. Service creation using managed objects is completely within the control of the network owner, is considerably faster, and is done within a unified environment using a consistent set of tools. A new function is requested that involves a new service, a new call state, and a new protocol, in block 380. The network owner uses service designers or programmers to modify the managed objects (support control, call control, and service control) as needed, in block 382. Iterative unit testing is performed using the new versions of the managed objects in a test SLEE until the new function is verified, in block 384. Integration testing of the new versions of the managed objects, in conjunction with only those other objects and system parts that interact with the modified objects, is performed in block 386. The NMS is used to deploy the new managed objects to the ICPs, in block 388. The implementation of the new function is completed in block 390.
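The event-driven, on-demand instantiation described for Figure 11 can be sketched in Java as follows. The interface and class names (ServiceObject, LineControlChain) and the digit-interpretation rule are assumptions of the sketch; it shows only that each incremental event is routed to the control chain for the line, and that a handling service object is instantiated the first time it is needed and reused thereafter.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the event dispatch implied by Figure 11: each
// incremental event (off-hook, dialed digits) arriving from the RCP or an SS7
// link is routed to the control chain for that line, and service objects are
// instantiated on demand.
interface ServiceObject {
    void handleEvent(String event);
}

final class LineControlChain {
    private final Map<String, ServiceObject> liveServices = new HashMap<>();

    void onEvent(String event) {
        // Resource, support, and call control would interpret the raw event here;
        // this sketch shows only the on-demand instantiation of a service object.
        String serviceName = interpret(event);
        liveServices
                .computeIfAbsent(serviceName, name -> instantiate(name))
                .handleEvent(event);
    }

    private String interpret(String event) {
        // Assumption of this sketch: digits beginning with '*' select a vertical service.
        return event.startsWith("*") ? "FeatureService" : "BasicCallService";
    }

    private ServiceObject instantiate(String name) {
        System.out.println("Instantiating " + name);
        return e -> System.out.println(name + " handled event " + e);
    }
}

class DispatchDemo {
    public static void main(String[] args) {
        LineControlChain line = new LineControlChain();
        line.onEvent("off-hook");
        line.onEvent("*70");
        line.onEvent("5551234");  // reuses the already-instantiated BasicCallService
    }
}
```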
Figure 13 illustrates the use of similar tools during service creation to create compatible objects for the same target environment, in accordance with the present invention. In the MOCE 228, developers of different types of functionality (Context A 400, Context B 402, and Context C 404) use similar tools (Tool A 406 and Tool B 408) to create compatible objects (MO Type 1 410, MO Type 2 412, and MO Type 3 414) for the same target environment. The palette (Palette A 416, Palette B 418, and Palette C 420) for each tool (Tool A 406 and Tool B 408) is appropriately different for the type of development. Each managed object (MO Type 1 410, MO Type 2 412, and MO Type 3 414) is created by combining input data (MO Type 1A Input Form 422, MO Type 2A Input Form 424, and MO Type 3AD Input Form 426) and context information (Context Information A 428, Context Information B 430, and Context Information C 432), using the tools (Tool A 406 and Tool B 408) and the palettes (Palette A 416, Palette B 418, and Palette C 420). The managed objects (MO Type 1 410, MO Type 2 412, and MO Type 3 414) are then stored in the Repository 230. Figure 14 illustrates how the palette for each tool can change in response to new functional pieces, in accordance with the present invention.
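A simplified Java sketch of the Figure 13 flow is shown below, assuming only that a tool combines type-specific input data with context information, subject to its palette, to produce a managed object definition that is then deposited in the repository. All class names and fields are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the MOCE flow of Figure 13.
final class ManagedObjectDefinition {
    final String type;        // e.g. "MO Type 1"
    final String inputForm;   // captured input data
    final String context;     // context information for the target environment

    ManagedObjectDefinition(String type, String inputForm, String context) {
        this.type = type;
        this.inputForm = inputForm;
        this.context = context;
    }
}

final class CreationTool {
    private final String palette;   // palette appropriate to the type of development

    CreationTool(String palette) {
        this.palette = palette;
    }

    ManagedObjectDefinition create(String type, String inputForm, String context) {
        // The palette constrains what the developer can assemble with this tool.
        return new ManagedObjectDefinition(type, inputForm + " [" + palette + "]", context);
    }
}

final class MoRepository {
    private final List<ManagedObjectDefinition> store = new ArrayList<>();

    void deposit(ManagedObjectDefinition mo) {
        store.add(mo);
    }

    int size() {
        return store.size();
    }
}

class MoceDemo {
    public static void main(String[] args) {
        CreationTool toolA = new CreationTool("Palette A");
        MoRepository repository = new MoRepository();
        repository.deposit(toolA.create("MO Type 1", "input form 1A", "context A"));
        System.out.println("Objects in repository: " + repository.size()); // 1
    }
}
```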
The palette for each tool can change in response to new functional pieces introduced by other developers. Figure 15 illustrates the usage flow of the Managed Object Creation Environment. The type of software component is selected in block 450, the configuration is selected in block 452, and the appropriate tool is launched in block 454. The user can select Tool A 456, Tool B 458, or Tool C 460. Next, the results are collected in block 462, and the configuration is updated in block 464. Figure 16 illustrates the software stack of the Managed Object Creation Environment. The basis of the Managed Object Creation Environment software stack is the development infrastructure 470. The development infrastructure 470 interfaces with the software configuration database 472 to read and store the information pertinent to the creation of managed objects. The user creates the managed objects using the software creation tools A 480, B 482, and C 484, which in turn use the tool adapters A 474, B 476, and C 478 to interface with the development infrastructure 470. Figure 17 illustrates the way in which the unified execution environment also allows simplified creation and modification of even the tools with which developers author objects for the SLEE. A few preferred embodiments have been described in detail hereinbefore. It is to be understood that the scope of the invention also comprises embodiments different from those described that nevertheless fall within the scope of the claims. For example, it is understood that the general purpose computer is a computing device that is not made specifically for one type of application. The general purpose computer may be a computing device of any size that can perform the functions required to implement the invention. A further example is the "Java" programming language, which can be replaced with other equivalent programming languages having similar characteristics and performing similar functions, as required to implement the invention. The use of these terms herein, as well as of the other terms, is not intended to limit the invention to these terms alone. The terms used can be interchanged with others that are synonymous and/or refer to equivalent things. Words of inclusion are to be interpreted as non-exhaustive when considering the scope of the invention. It should also be understood that different embodiments of the invention may be employed in, or incorporated into, hardware, software, or microcoded firmware. Although the present invention has been disclosed and described in connection with the embodiments described above, it will be apparent to those skilled in the art that numerous changes, variations, and modifications are possible within the spirit and scope of the invention. Accordingly, it is therefore intended that the following claims encompass such variations and modifications.

Claims (66)

1. An intelligent call processor for use in a communication system, the intelligent call processor comprising: a logical platform having a plurality of functions, wherein at least one of the functions is a service processing function, at least one of the functions is call processing, and at least one of the functions is facility processing; and a processor for executing the plurality of functions.
2. The intelligent call processor as described in claim 1, wherein the logical platform is a virtual machine.
3. The intelligent call processor as described in claim 2, wherein the plurality of functions are encoded in an object-oriented language.
4. The intelligent call processor as described in claim 2, wherein the plurality of functions are encoded as Java byte codes.
5. The intelligent call processor as described in claim 1, wherein the logical platform is an object-oriented platform, and the plurality of functions are managed objects.
6. The intelligent call processor as described in claim 1, wherein the processor includes: at least one general-purpose computer; at least one data storage device; and a high-speed communications link connecting the general-purpose computer with the data storage device.
7. The intelligent call processor as described in claim 5, wherein the general purpose computer is based on a microprocessor architecture.
8. The intelligent call processor as described in claim 5, wherein the general-purpose computer, the data storage device, and the high-speed network are configured to provide redundant processing, data storage, and communications.
9. The intelligent call processor as described in claim 1, which further includes a first connection for communicably linking the processor to an adjunct processor that provides at least one logical function selected from the group of: provisioning, billing, and service restoration.
10. The intelligent call processor as described in claim 1, which further includes a second connection for communicably linking the processor to a direct signaling link.
11. The intelligent call processor as described in claim 1, which further includes a third connection for communicably linking the processor with at least one resource complex.
12. The intelligent call processor as described in claim 1, which further includes a fourth connection for communicably linking the processor to a wide area network.
13. The intelligent call processor as described in claim 1, wherein the processor provides a flexible architecture that allows new and old functions to coexist.
14. The intelligent call processor as described in claim 1, wherein the plurality of functions can be ported to at least two different computing devices having different computing architectures.
15. An intelligent switching node in a communication network, the intelligent switching node comprising: an intelligent call processor having a processor for executing a plurality of functions within a logical platform, wherein at least one of the functions is service processing, at least one of the functions is call processing, and at least one of the functions is facility processing; and a resource complex communicably linked with the intelligent call processor, and logically separated from the intelligent call processor.
16. The intelligent switching node as described in claim 14, wherein the communication link from the resource complex to the intelligent processor further comprises a redundant communication link.
17. The intelligent switching node as described in claim 14, wherein the communication link from the resource complex to the intelligent processor is a high-speed data communications link.
18. The intelligent switching node as described in claim 14, wherein a resource complex proxy includes an interface from the intelligent call processor to the resource complex.
19. The intelligent switching node as described in claim 14, wherein the resource complex includes a collection of physical devices or resources that provide support, signaling, and connection services.
20. The intelligent switching node as described in claim 14, wherein the resource complex includes an interface that is connected to a plurality of subscribers and switching networks by means of conventional telephony connections.
21. The intelligent switching node as described in claim 14, wherein the resource complex includes a plurality of interfaces with the client systems.
22. The intelligent switching node as described in claim 14, wherein the resource complex uses a standardized interface to support network management and facility administration.
23. The intelligent switching node as described in claim 14, wherein the resource complex includes at least one switch structure.
24. The intelligent switching node as described in claim 14, wherein the resource complex includes an intelligent peripheral that processes and acts on information within an actual call transmission path.
25. The intelligent switching node as described in claim 23, wherein the intelligent peripheral processes the data in real time, using digital signal processing techniques.
26. The intelligent switching node as described in claim 14, which further includes an element for communicably linking the resource complex to a network management system by means of an operations link.
27. The intelligent switching node as described in claim 14, which further includes: a first connection for communicably linking the resource complex with a support portion of a direct link; and a second connection for communicably linking the processor with a signaling portion of the direct link.
28. The intelligent switching node as described in claim 14, which further comprises an element for monitoring and manipulating a plurality of processing subelements, the subelements being part of the plurality of functions.
29. The intelligent switching node as described in claim 14, which further includes: an element for monitoring the use of the plurality of functions, and an element for removing a selected function from the plurality of functions.
30. The intelligent switching node as described in claim 14, wherein the processor executes a virtual machine process that loads and executes a service layer execution environment.
31. The intelligent switching node as described in claim 29, wherein the service layer execution environment hosts a plurality of software instances that implement the service processing architecture, and are derived from a base class of managed objects.
32. The intelligent switching node as described in claim 30, wherein the plurality of instances of the software can be instantiated as necessary by a network management system, or during the processing of a service requested by a subscriber.
33. The intelligent switching node as described in claim 29, wherein the execution environment of the service layer controls the execution of a plurality of managed objects.
34. The intelligent switching node as described in claim 32, wherein the plurality of managed objects are encoded in an object-oriented language.
35. The intelligent switching node as described in claim 32, wherein the plurality of managed objects are encoded as Java byte codes.
36. The intelligent switching node as described in claim 29, wherein the execution environment of the service layer includes a managed object layer and a library layer.
37. The intelligent switching node as described in claim 29, wherein a set of operational rules is used to determine whether a specific managed object can be deployed and instantiated.
38. The intelligent switching node as described in claim 35, wherein the set of operating rules are encoded within the specific managed object.
39. The intelligent switching node as described in claim 35, wherein the set of operating rules are encoded in a network management system.
40. The intelligent switching node as described in claim 29, wherein the set of operational rules specify logical conditions during which the managed object is allowed or prohibited to deploy or instantiate.
41. The intelligent switching node as described in claim 29, wherein the set of operating rules specify the physical locations where the managed object is allowed or prohibited to be deployed or instantiated.
42. The intelligent switching node as described in claim 29, wherein the network management system resolves any conflicts that arise in the set of operating rules when the managed object is deployed or instanced.
43. The intelligent switching node as described in claim 29, wherein the service layer execution environment further includes: an element for deploying and removing a plurality of managed objects; an element for instantiating, interrogating, and destroying a plurality of instances of managed objects; an element to support the interaction and collaboration of the plurality of managed objects; an element to manage access to a native library; an element for communicating with an interface to a network management system, for receiving and implementing control signals; an element to assert a plurality of global values; and an element for controlling the start and stop of the plurality of managed objects.
44. The intelligent switching node as described in claim 29, wherein the execution environment of the service layer includes a base class of managed objects, including a service control class, a call control class, a support control class, and a resource proxy class.
45. The intelligent switching node as described in claim 42, wherein the service control class further includes: a session manager class; a support capability class; a service arbitration class; a feature class; a service discriminator class; a media-independent service class; and a connection manager class.
46. The intelligent switching node as described in claim 42, wherein the call control class further includes an origination class and a termination class.
47. The intelligent switching node as described in claim 42, wherein the support control class adapts specific signals and events to and from the resource complex by means of the resource proxy.
48. The intelligent switching node as described in claim 42, wherein the resource proxy class interconnects the execution environment with a plurality of switching devices in a support network.
49. The intelligent switching node as described in claim 42, wherein the resource proxy class further includes: a telephone class; a voice response unit class; a trunk circuit class; and a modem class.
50. An intelligent communications network comprising: a plurality of distributed intelligent network nodes, each distributed intelligent network node having: an intelligent call processor having a processor for executing a plurality of functions within a logical platform, wherein at least one of the functions is service processing, at least one of the functions is call processing, and at least one of the functions is facility processing, and a resource complex communicably linked with the intelligent call processor, and logically separated from the intelligent call processor; a network management system for monitoring and controlling a wide area network and the plurality of intelligent switching nodes; and the wide area network interconnecting the plurality of distributed intelligent network nodes and the network management system.
51. The intelligent network as described in claim 48, wherein the wide area network is a synchronous optical network.
52. The intelligent network as described in claim 48, wherein the wide area network uses a common object request broker architecture.
53. The intelligent network as described in Claim 48, wherein the network management system operates within a structure that complies with the telecommunications management network.
54. The intelligent network as described in claim 48, wherein the network management system monitors and controls the plurality of distributed intelligent network nodes in such a way that services and functions can be flexibly and dynamically distributed across the plurality of distributed intelligent network nodes.
55. The intelligent network as described in claim 48, wherein the network management system controls the deployment of services, maintains the health of these services, provides information about these services, and provides management functions at the network level.
56. The intelligent network as described in claim 48, wherein the network management system further includes: an element for accessing, controlling, and monitoring the services and hardware in the plurality of intelligent network nodes through an agent functionality; an element for controlling the operation of a local operating system in each intelligent call processor, including the starting and stopping of a plurality of processes, the interrogation of the contents of a process table, and the status of the plurality of processes; an element for monitoring the operation of the intelligent call processors; an element for controlling the initialization and operation of the wide area network; and an element for controlling and supporting the instantiation and operation of a plurality of service layer execution environments that are executed in a single intelligent call processor.
57. The intelligent network as described in claim 48, wherein the plurality of functions can be moved to the intelligent call processor within any node of the distributed intelligent network without interrupting the operation of the intelligent call processor.
58. The intelligent network as described in claim 48, which also includes a Managed Object Creation Environment communicably linked to the network management system to create, modify, test, and deploy functions that are to be used by the call processor.
59. The intelligent network as described in claim 56, wherein the Environment of Creation of Managed Object uses modular tools, in such a way that the service development work can be divided.
60. The intelligent network as described in claim 56, wherein the software of managed objects and the respective dependencies are created in the Managed Object Creation Environment.
61. The intelligent network as described in claim 56, wherein the Managed Object Creation Environment further includes a plurality of subcomponents that can be used to create services.
62. The intelligent network as described in claim 56, wherein the Managed Object Creation Environment also includes a unified collection of tools hosted in a single environment or user platform.
63. The intelligent network as described in claim 56, wherein the Managed Object Creation Environment also includes a collection of operations that are required throughout the service creation process, such as service documentation, managed object definition, interface definition, protocol definition, and data input definition, which are encapsulated in managed objects, and the testing of the service.
64. A method to use a Managed Object Creation Environment, in order to update a software configuration, which comprises the steps: select a type of software component; select the software configuration; launch an appropriate software creation tool; use the appropriate software creation tool to modify or create one or more managed objects within the software configuration; compile one or more results from the appropriate software creation tool; and update the selected software configuration, based on the one or more results.
65. A method for using one or more managed objects in order to provide a service, which comprises the steps of: receiving an event within the execution environment of the service layer; determine the one or more managed objects needed to respond to the event; instantiate the one or more managed objects; and provide the service using the one or more instantiated managed objects.
66. A method to implement a new service using one or more managed objects, which comprises the steps of: creating the one or more managed objects; test the one or more managed objects in a test service layer execution environment; provided that the test of one or more managed objects is successful, perform integration tests by testing one or more managed objects with other managed objects and systems necessary to implement the new service; and whenever the integration test is successful, implement the new service by deploying the one or more managed objects to one or more intelligent call processors.