
CA2338937A1 - Third party management platforms integration - Google Patents

Third party management platforms integration

Info

Publication number
CA2338937A1
Authority
CA
Canada
Prior art keywords
objects
platform
data
interface
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002338937A
Other languages
French (fr)
Inventor
Steve Baker
Bob Vincent
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Crosskeys Systems Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CA002212251A external-priority patent/CA2212251A1/en
Application filed by Individual filed Critical Individual
Priority to CA002338937A priority Critical patent/CA2338937A1/en
Publication of CA2338937A1 publication Critical patent/CA2338937A1/en
Abandoned legal-status Critical Current


Classifications

    • H04Q3/0095: Specification, development or application of network management software, e.g. software re-use
    • H04L41/0213: Standardised network management protocols, e.g. simple network management protocol [SNMP]
    • H04L41/022: Multivendor or multi-standard integration
    • H04L41/0233: Object-oriented techniques, for representation of network management data, e.g. common object request broker architecture [CORBA]
    • H04L41/052: Network management architectures or arrangements using standardised network management architectures, e.g. telecommunication management network [TMN] or unified network management architecture [UNMA]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

In a method of importing data into a data processing system from a third party platform, for each physical object an interface object having a canonical form mappable to an underlying, externally invisible form in said data processing system is first created. The interface object has a set of permissible extensions mappable to said externally invisible form. New interface objects can be added by deriving new mappings from said extensions. The data processing system is thus decoupled from the third party platform so that the data processing system can easily interface with multivendor products.

Description

THIRD PARTY MANAGEMENT PLATFORMS INTEGRATION
Field of the Invention

This invention relates to a system for integrating service management, for example of telecommunications networks, with third party management platforms.
Background of the Invention

When information is to be exchanged between two systems, for example, between a system monitoring a network and a system monitoring the performance of the network relative to existing service agreements, one must typically agree on a common format and interface. In the case of multiple systems, one must either define a format/interface for each system or impose a least common denominator (LCD) interface. The former is expensive to build and difficult to scale beyond a few systems. The latter is restrictive and leads to user dissatisfaction when key functionality is omitted from the interface. The LCD approach fails when a new system is encountered that has limited functionality and is incapable of supporting the LCD interface. In this case, it is necessary either to build a custom interface or adjust the LCD to omit functionality not supported by the new system.
A typical example of a system to which the invention is applicable is the ResolveTM system from Crosskeys Systems Corporation. This system addresses three primary areas of the Service Management level of the telecommunications management network (TMN) model: Performance Management, Configuration Management, and Fault Management.
The applications in these areas are designed to be integrated and work together in configurable combinations through the use of common core components.
A customer has one or more services and one or more contracts with the service provider. A contract includes a definition of the specified quality of service (as captured in a Service Level Agreement). A service profile defines a certain quality of service for a range of services. For example, a "Gold" service profile may specify 99.995% availability whereas a "Silver" service profile may specify 99.9% availability.
Each Service is supported by one or more Service Components. A service component could be a physical piece of equipment such as a termination point, a logical entity such as an end-to-end path, or an arbitrary external system such as a customer support desk. All service components support the notion of availability, and most support the production of performance measures.
This object model is maintained in the Service Management Information Base (SMIB).
The SMIB is an operational data store. In addition to customer, contract and service information, it contains current, volatile data (such as path states and events), which may be used in real-time reports.
The ResolveTM system also maintains two other major information bases, the Historical Information Base (HIB) and the Summarized Information Base (SIB).
The HIB contains detailed, time-varying data (such as path statistics, and old state and event data). This data is held in the HIB for a short period of time (typically 60 days).
This data can be used to produce detailed reports.
The SIB contains summarized information, based on the data in the HIB. Data in the SIB may be up to 180 days old. Most relevant reports will be run against the SIB, as its information is the most meaningful. The SIB information may be used to identify trends and patterns in usage and availability.
Availability and performance data is captured from a number of information sources and used to update the SMIB and the HIB. Periodically the HIB is summarized to produce one or more SIBs. Users can then define and produce reports, as well as configure existing and new services.
In order to provide product service management functions in the telecommunications domain, the ResolveTM system must interface with a wide variety of network management systems. The capabilities and strengths of the third party management platforms vary. The range of applications and customers also varies from platform to platform. Simply aiming at a lowest common denominator approach to integration will lose many of the advantages of a particular platform. Many different interfaces must therefore be provided.
Nazim Agoulmine et al., "A system architecture for updating management information in heterogeneous networks", Communications for Global Users, including a Communications Theory Mini Conference, Orlando, December 6-9, 1992, vol. 2, discloses a system that permits the updating of management information represented by managed objects within a Management Information Base. This system does not, however, have the flexibility to work with different platforms without writing custom software.
Hunt R., "SNMP, SNMPv2 and CMIP - The Technologies of Multivendor Network Management", Computer Communications, vol. 20, no. 2, March 1997, discloses a system for managing multivendor networks using proxy and translation procedures, which must be specially written for each network.
Summary of the Invention

According to the present invention there is provided a method of importing data into a data processing system from a third party platform collecting data from physical objects, characterized in that an interface object having a canonical form and a set of extensions is created on said third party platform for each physical object represented in said data processing system, said canonical form is mapped to an underlying, externally invisible form existing in said data processing system, said data is converted to said canonical form in said interface objects, said data is transferred through said mappings to said data processing system, and additional interface objects representing new physical objects are created using said extensions to derive new mappings for said additional objects, whereby said data processing system is decoupled from said third party platform.
There are thus three fundamental data formats in accordance with the invention: the imported data, which is in the third party format; the intermediate format, which is the canonical form; and the target form, which is externally invisible. The externally invisible form is an abstract data format that supports arbitrary attributes on arbitrary objects. The collected data may, for example, be availability and performance information, which is used to perform a variety of service management tasks, but the invention is equally applicable to any data that needs to be imported into a data processing system that is adaptable to multiple vendor platforms.
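To make the three formats concrete, the following sketch models each stage as a distinct C++ type with an explicit conversion at the boundary into the data processing system. All names here (NativeRecord, CanonicalObject, InternalStore) are hypothetical illustrations, not part of the patent.

#include <map>
#include <string>

// 1. Imported data: whatever the third party platform produces,
//    e.g. a line from a vendor flat file.
struct NativeRecord {
    std::string rawLine;
};

// 2. Intermediate format: the canonical form, supporting arbitrary
//    attributes on arbitrary objects.
struct CanonicalObject {
    std::string className;
    std::map<std::string, std::string> attributes;
};

// 3. Target form: externally invisible. Only the mapping layer knows
//    how to produce it, so outside callers never depend on it.
class InternalStore {
public:
    void store(const CanonicalObject& obj) {
        (void)obj;  // canonical -> internal mapping happens here
    }
};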

In accordance with the invention, if an existing interface object and associated mapping exists and it is desired to import something similar, but different, the mapping for a new interface object can be derived by expressing it as an extension of an existing mapping.
The derived mapping contains only the differences between itself and the original mapping (the one it is derived from). It is not necessary to define the mapping from scratch.
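One natural reading of a derived mapping that "contains only the differences" is subclassing: the derived mapping overrides just the behaviour that differs and inherits the rest. A minimal sketch under that assumption (the class and method names are invented for illustration):

#include <string>

// Hypothetical mapping for an existing interface object.
class TerminationPointMapping {
public:
    virtual ~TerminationPointMapping() = default;

    // Translate the platform's availability value into the canonical
    // operational state.
    virtual std::string mapOperationalState(const std::string& native) {
        return native == "up" ? "enabled" : "disabled";
    }
};

// A derived mapping expresses only the differences: here, a platform
// that reports availability as "yes"/"no" instead of "up"/"down".
class VendorTerminationPointMapping : public TerminationPointMapping {
public:
    std::string mapOperationalState(const std::string& native) override {
        return native == "yes" ? "enabled" : "disabled";
    }
    // Everything else is inherited unchanged from the original mapping.
};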
A platform in this context is any kind of underlying computer or data processing system on which programs can run. Different platforms may have different operating systems and data formats.
A service model may also be exported to the third party management platform to generate alarms and/or trouble ticket information relating to this object model.


In the method according to the invention, a Gateway application is run on the management platform that will exchange information between the system and the platform.
The gateway interacts with the platform using the locally specified application programming interfaces. From a user perspective, the gateway will appear, to the extent that it is visible at all, as another platform component, consistent in design and interface with all other components.
The interface is specified by an object model. A base object model is presented, and a range of possible extensions to that model is then outlined. Thus the interface can vary in complexity and sophistication depending on user/application need.
The design provides for system integrators to customize the behavior of the gateway, using a Turing complete language, to enable local modifications to be made quickly and easily.
The invention also provides a system for importing data into a data processing system from a third party platform collecting data from physical objects, comprising an interface object for each physical object, said interface object receiving said data and having a canonical form and having a set of permissible extensions mappable to an underlying, externally invisible form in said data processing system so as to decouple said data processing system from said third party platform and permit mappings for new interface objects to be derived as extensions of existing mappings.
Brief Description of the Drawings

The invention will now be described in more detail, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a service component object;
Figure 2 shows the mapping between object models;
Figure 3 illustrates file format conversion;
Figure 4 shows a scan notification report;
Figure 5 shows the interfaces relating to the exchange of service information with a third party management platform;
Figure 6 shows a CORBA interface;
Figure 7 shows the feature architecture of an interface;
Figure 8 shows a network-interface operating environment;
Figure 9 shows the components of a network interface;
Figure 10 shows a two-step translation process;
Figure 11 shows a multivendor parser; and
Figure 12 shows an event collector.
Description of the Preferred Embodiments

The invention will be described with respect to the ResolveTM service level agreement management system referred to above. The objective is to provide a generic interface that permits the ResolveTM system to interface with multivendor third party platforms without the need to re-write internal software.
Figure 1 shows an interface object or service component that might exist on a third party network management platform. It has two halves, an abstract half 10 that is visible to the service management system and a physical half 12 that references the object in the service level management software that denotes the physical network facility.
Such a service component is intended to be applicable in a wide variety of cases. All concrete service components are viewed as particular types of this more general service component, which is termed an Abstract Service Component (ASC).
The ASC object monitors the network object for changes in availability. The ASC object therefore acts as a form of proxy object. Changes are then reflected in the ASC (as a change in the operational state) and an event is forwarded to service management system.
For every ASC object there is exactly one network object.
To use a network object as a service component it must support some notion of operational availability. For objects that support such a notion in a standards-compliant manner (i.e. they have an Operational State attribute modeled according to X.721) the implementation of the ASC is simple.
However, it is possible that some network objects may use some other names for operational state or identify changes in availability by some other means. A
configurable mapping is thus provided between the operational state of the ASC and the equivalent state in the network object. This is achieved by using a mapping language to express the relationship between the ASC and the network object.
An attempt to perform an action on the ASC is mapped into an action on the network object. This mapping is performed by embedding a flexible scripting language within the ASC. Actions on the ASC result in a script being executed. Thus a request to receive event state changes results in a script being executed that in turn gathers an event from the physical object and interprets it appropriately.
For example, suppose a network object representing a multiplexer modeled operational state as an attribute called "available". Suppose further that the attribute took the values "yes" and "no", but no event was issued when the attribute was changed. It would be possible to associate with the request to receive operational state event changes from the ASC a script that polled the multiplexer object and changed the ASC's operational state attribute and issued an event change when appropriate. Thus ResolveTM can perform availability based reporting on an object that a) does not have an operational state and b) does not issue state change events.
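The behaviour of such a script can be sketched as follows, expressed in C++ rather than the embedded scripting language (which the patent does not specify); the type and member names are illustrative only.

#include <string>

// Hypothetical view of the vendor's network object, which models
// availability as a "yes"/"no" attribute and issues no events.
struct MultiplexerObject {
    std::string available;
};

// Hypothetical ASC half: holds the canonical operational state and
// can forward state change events to the service management system.
struct AbstractServiceComponent {
    std::string operationalState;
    void emitStateChangeEvent() { /* forward to service management */ }
};

// Poll the network object and reflect any change into the ASC,
// issuing an event only when the derived state actually changes.
void pollMultiplexer(const MultiplexerObject& mux,
                     AbstractServiceComponent& asc) {
    const std::string newState =
        (mux.available == "yes") ? "enabled" : "disabled";
    if (newState != asc.operationalState) {
        asc.operationalState = newState;
        asc.emitStateChangeEvent();
    }
}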
The mapping between object models is shown in more detail in Figure 2. An arbitrary network object 20 on a third party management platform is mapped to an object 22 in the service level management system, in this case the ResolveTM Service Model. The mapping is achieved via an ASC 24. The ASC decouples the service level management system from the management platform without loss of functionality. This enables support for new management platforms to be added without requiring changes to the service level management system, which would necessitate a new release and necessarily longer development times.
As the ASC 24 runs on the remote platform it must conform to the relevant platform API. This implies that although the interface the ASC 24 presents to the service level management system is constant, the implementation of the ASC will be necessarily platform specific.
In addition to the availability of network object, there may be additional, largely static, information that might be usefully passed to the service level management system, such as the Available Bit Rate of an ATM PVC. The attributes of the ASC object can be extended to include this additional information.
There are three main options for the ASC:
• Fixed number of fixed-name attributes: Add a fixed number of attributes called attribute_value1, attribute_name1, ..., attribute_valueN, attribute_nameN. The ASC would relate these general purpose variables to particular variables in the network object, and would use the attribute_name attribute to store a meaningful presentation name to be used by the service level management system. These attributes would be of a generic type to accommodate a wide variety of options.
• Variable number of fixed-name attributes: Add an array of attribute/value pairs. This is a variation on the arrangement above, but permits a variable number of attributes (a sketch of this arrangement follows the list).
• Variable number of variable-name attributes: This is the most flexible option, permitting user-specified additions to the object model. Such additions could be performed by directly modifying the specification of the object (in GDMO - Guidelines for the Definition of Managed Objects - for example) or by using a platform specific tool that permits such operations. Note that this option requires that the ASC implementation and the ResolveTM interface be data driven.
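The attribute/value-pair arrangement of the second and third options amounts to carrying a dynamic attribute list on the ASC. A minimal sketch of that idea, with hypothetical names (a real ASC would be specified in GDMO or the platform's own modeling language):

#include <optional>
#include <string>
#include <vector>

// Hypothetical ASC carrying a variable number of attribute/value pairs.
struct AscAttribute {
    std::string name;    // presentation name shown to service management
    std::string value;   // value read from the underlying network object
};

class AbstractServiceComponent {
public:
    void setAttribute(const std::string& name, const std::string& value) {
        for (auto& a : attrs_) {
            if (a.name == name) { a.value = value; return; }
        }
        attrs_.push_back({name, value});   // new attribute: list grows
    }

    std::optional<std::string> getAttribute(const std::string& name) const {
        for (const auto& a : attrs_) {
            if (a.name == name) return a.value;
        }
        return std::nullopt;   // attribute not mapped for this object
    }

private:
    std::vector<AscAttribute> attrs_;
};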
It is possible to generalize the notion of an ASC, containing a number of arbitrary attributes, to one capable of collecting statistical information from the network management system. Statistical information is that information that is typically reported at a given interval (for example, every 15 minutes). Such information is typically quite large and issuing attribute change events quickly becomes unrealistic and therefore a more considered approach is required.
Performance information is traditionally presented in a flat file, usually consisting of ASCII data. More recently, the standards bodies have specified an approach using OSI events and some vendors now support a native OSI interface to performance data.
In many cases the statistical information can be obtained directly in a flat file and this is preferred as it is simpler. Performance information is presented in a native format and must be converted into the format maintained by the service level management system as shown in Figure 3. The performance data attributes are based on the relevant standards (ATM Forum, Frame Relay Forum, etc), the same standards used by the equipment vendors. Thus conversion is largely a matter of accommodating differences in presentation, rather than in content.
In some cases a vendor may omit a standard parameter, supply a useful but non-standard performance parameter, or calculate a standard parameter in a non-standard way. In this case the conversion mechanism becomes more complex and a toolkit solution is required.
From the perspective of events and object models, the ASC provides a decoupling of the service level management system from the management platform. This decoupling is also desirable for importing performance data from file and therefore the conversion is performed in two stages. The native format is first mapped to a canonical form which is then read by a statistics importer. The canonical form is an abstract data format that supports arbitrary attributes on arbitrary objects.
Thus, there need only be one statistics importer and adding support for a new native format does not require changes to ResolveTM. The conversion to canonical form is typically a straightforward task and can be performed by a systems integrator, platform vendor or by CrossKeys. This conversion can also be applied to binary files which are often used as a more compact format for performance data. For ASCII files a wide variety of parsing/translation tools can be used, for example yacc/lex or CrossKeys' own parsing toolkit, AMPERE.
There may be cases where statistical information is available only via the network management system. One attractive approach is to use Q.822, as is done in CrossKeys Traffic Management applications (AltusTM). Q.822 provides a generic framework for collecting performance information. Objects that have attributes that change over time are called monitored objects. To collect information from monitored objects at a certain reporting interval an object called a scanner is used (scanner objects are defined by X.738). A scanner object gathers many attribute values together and emits a Scan Report Notification that is, in essence, an optimized form of attribute value change event used when the number of attributes, and frequency of change, is high.
When information is available in the form of a Scan Report Notification it could either be sent directly to ResolveTM or converted into a flat file format before importing. The format of a Scan Report Notification is well-defined and therefore the need for a vendor-specific format conversion phase is removed.
A Scan Report Notification is an event and therefore the Abstract Service Component model can be used again, as shown in Figure 4. The ResolveTM Statistics Collector receives Scan Report Notifications from an arbitrary network component, via the ASC.
As the figure suggests, the fundamental mechanism for relaying events remains the same.
Scan Report Notification can be seen as a type of canonical form. Performance data received from the arbitrary network component can be in any form, and it is then mapped to canonical form (Scan Report Notification) and then relayed to a statistics collector.
The intent of an ASC is that it be general purpose. Thus if a network object model supports a termination point object and a link object, then both would be represented in ResolveTM as an instance of an ASC. Obtaining the operational state of the termination point object may be quite different from that of a link object. Thus in defining the mapping it is necessary to know to what type of object the ASC is pointing. In essence, type information is required to support polymorphic behavior of the ASC.
This type information is also of interest to ResolveTM. One of the value-added capabilities of ResolveTM is a suite of standard reports. Such reports are based on a detailed understanding of the technology. If the service component is of type "ATM VPC"
then a number of standard reports are available, for example.
With an ASC one cannot know a priori what technology it will be supporting and thus the possibility of value-added service disappears. However, by defining some essential characteristics of, say, an SDH Add-Drop Mux (ADM) it would be possible to pre-define some SDH ADM reports. Such reports might require that information about X, Y, Z be available. If such information is available it would be desirable if there was some way to tell ResolveTM that the ASC was in fact being used as a proxy for an SDH ADM
and that parameters X, Y and Z are available and that the standard reports were applicable.
One could envisage that as a number of third party applications were integrated then the possibility of developing a suite of standard reports for each technology type would increase.
Note however that the concept of type would need to be quite sophisticated. If a SDH
ADM supports parameters X, Y and Z then a suite of standard reports is possible. If a SDH ADM supports parameters U, V, W, X, Y and Z then a super-set of reports could be supported. If a SDH ADM supports parameters U, V, X and Z, but not Y, then some smaller combination of reports is possible.
Rather than develop a complex type system, the object ID (OID) of the network component object is used as an indication of type. An Object ID is an OSI-specified means of uniquely identifying an object class. All OSI object classes have a unique Object ID making it an excellent choice for distinguishing between classes of objects without inventing a complex type scheme. This OID can then be used to control both the polymorphic aspects of the mapping and as a way to identify to ResolveTM the capabilities of the network object and hence the extent to which preexisting reports would be applicable.
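Treating the OID as a lightweight type tag could look like the sketch below, where a registry associates known object classes with the report suites they enable. The OID values and report names are invented for illustration.

#include <map>
#include <string>
#include <vector>

// Hypothetical registry keyed by OSI object identifier. The OID of the
// underlying network object selects the set of pre-defined reports.
const std::map<std::string, std::vector<std::string>> kReportRegistry = {
    {"1.3.111.2.1.5", {"ATM VPC availability", "ATM VPC utilisation"}},
    {"1.0.2022.4.7",  {"SDH ADM availability"}},
};

std::vector<std::string> reportsFor(const std::string& oid) {
    auto it = kReportRegistry.find(oid);
    if (it == kReportRegistry.end()) {
        // Unknown class: the ASC still functions as a proxy, but no
        // value-added standard reports apply.
        return {};
    }
    return it->second;
}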
The ASC objects act as proxy objects for the real network objects and as such should not be directly visible to the end user. To make this possible, it would be desirable if each time a physical service component (PSC) object was created, an ASC would be automatically created. Similarly for object deletion.
It is simple for a systems integrator to write scripts that could auto-discover PSC objects and create the equivalent ASC objects. Such a script must be able to distinguish between different types of PSC objects so that appropriate mapping could be employed.
The use of an Abstract Service Component enables information to be gathered from an arbitrary network component object and incorporated within ResolveTM for the purposes of Service Management. An extended Abstract Service Component allows more complex information to be exchanged, including performance data.
By mapping the network object model to the ResolveTM object model in two stages (i.e. first to an Abstract Service Component) the integration effort can proceed at a greater rate.
No changes to ResolveTM are necessary and, by permitting the ASC to be modified without recompilation, the integrator can quickly bring ResolveTM's service management capabilities to a new type of network object.
The same event mechanism can be used to relay performance data in the form of Scan Report Notifications. Such a solution is both standards based and compatible with CrossKeys Altus products.
In situations where performance data is available in the form of a flat-file, an alternative approach is adopted. Performance data is first mapped to a canonical form. The ResolveTM statistics collector then reads this canonical form, instead of the native format. This permits support for new native formats to be added without requiring changes to ResolveTM.
The two solutions to importing performance data can be seen to be logically equivalent, differing only in the exact definition of the canonical form (ASCII text in a flat file vs Scan Report Notification). This equivalence, and the previously noted relationship to the model shown in Figure 2, means that a single, uniform architecture can be employed to solve a range of integration issues.
All these mechanisms (which are essentially variations on a common solution) address a common objective: to support the integration of new information sources without requiring continual change to ResolveTM. By avoiding changes to ResolveTM the integration with other platforms is de-coupled from the ResolveTM release cycle, allowing faster integration. By making this integration configurable and open, it is possible for it to be performed by third parties.
Another feature of the described system is the ability to exchange service information with the third party platform. This is achieved by:
• making ResolveTM's customer, service and service component objects visible on a third party platform,
• exporting ResolveTM-detected SLA violations into the third party platform's alarms and trouble tickets facilities, and
• receiving trouble tickets as a means of supporting customer-perceived outages.
In one embodiment, the ResolveTM Object Model is exported in the form of a flat file, enabling third party platforms to incorporate this information in a variety of applications.
In a more sophisticated embodiment, an object model is defined on the remote platform (using GDMO, for example) and implementing a proxy-agent. Attempts to show the attributes of a service entity, for example, would result in a request being sent to ResolveTM to get the real attributes and then returning them to the platform.
If the proxy-agent did not cache any of the attribute values then there is no need to synchronize with ResolveTM with respect to ResolveTM-initiated attribute value changes. It will, however, be necessary to synchronize with respect to object creation/deletion. This could be done once on start-up and subsequently tracked via ResolveTM-generated object creation/deletion events.
When a SLA (Service Level Agreement) is violated a Quality of Service alarm is raised.
Such an alarm is raised against either the service or the service component.
The format of the alarm is well known and raising it on a third party platform is comparatively straightforward.
It adds significant value to the SLA violation alarm if the network component that caused the violation is identified. In cases where it is possible to identify the component, one could raise the alarm against the network component directly, or continue to raise the alarm against the service component but somehow include in the alarm the identity of the network component. In both cases it is necessary to identify the network component by name. This information is stored in the ASC, and is therefore readily accessible.
To include such information in an OSI alarm is straightforward. It could be included in the Additional Information or Additional Text fields. The latter is simpler, but less flexible.

To raise the alarm against the network component itself is potentially more complicated.
Such an object will have a GDMO (or similar) specification that describes what events it is potentially capable of generating. If this description did not include a QoS alarm, it would be necessary to augment the specification.
Exporting Trouble Tickets is similar to exporting alarms in that it requires that the objects referenced in the TT (e.g. service) be known to the remote platform. TTs can be created by executing scripts directly on the platform. These scripts can be user-defined, enabling ResolveTM to support a variety of creation schemes.
However, it is envisaged that the prime task of ResolveTM is to raise alarms indicating actual or pending SLA violations. Whether these violations should result in a TT, and how the TT is populated, is usually a local decision influenced as much by local policy as by technology, and therefore ResolveTM would not typically create TTs directly.
The intent here is to allow an external trouble ticketing system to report changes in the availability of a service. When a customer reports a service outage he/she is unlikely to identify the failed service component; rather the customer would report that the service as a whole was unavailable. This could result in a trouble ticket indicating that the service is (perceived to be) unavailable. When ResolveTM receives a trouble ticket, it will mark the corresponding service or service component to indicate a trouble ticket is active. The outages are calculated when the trouble tickets are closed.
Importing Trouble Ticket information into ResolveTM is performed by means of a CORBA-based importer. Thus from a remote platform perspective there are three tasks to perform:
• Gather information about the creation/modification/deletion of Trouble Tickets,
• Filter out TTs not relevant to ResolveTM,
• Export TTs in a format understood by ResolveTM.
At a minimum the person creating the trouble ticket needs to know the names of all the services and service components (and by implication the names of the customers). Such information is provided in the exported ResolveTM object model.

An application would need to run on the remote platform and gather object creation/delete and attribute value change events pertaining to trouble ticket objects. It is relatively straightforward to filter on the managedObjectInstance attribute of the Trouble Ticket and then use the capabilities of the ResolveTM Trouble Ticket importer to transmit the information to ResolveTM.
Exporting the object model as a flat-file, while simple, is always a desirable feature as it permits the data to be used in arbitrary ways, from sophisticated network management applications to simply incorporating the data in a spreadsheet.
A more sophisticated level of interface is provided by implementing a proxy agent on the remote management platform. This will permit the ResolveTM objects to be used in the same way as any other object on the management platform, including being displayed on graphical maps or referenced in trouble tickets. Users of the management platform would be unaware that the objects were in fact managed by ResolveTM.
Importing trouble ticket information back into ResolveTM is performed via a predefined CORBA interface, permitting simple gateway applications to be developed on the management platform.
A suitable method for transmitting information between ResolveTM and the third party platform is to use a CORBA interface. Other methods, such as Q3, can be employed. CORBA stands for Common Object Request Broker Architecture. CORBA offers the greatest chance of providing a uniform transport mechanism across a range of platforms and is therefore discussed in greater depth.
The CORBA interface can itself be broken down into the three possibilities shown in Figure 6. A common intermediate representation will be used to exchange information as this provides a high level interface that will be consistent across all management platforms. Adding support for a new management platform requires only that the platform specific component be rewritten, enabling a new management platform to be added without a change to ResolveTM that would require a new release and unnecessarily delay integration.

Moreover, this interface can grow in scope, making more of the system functionality available and permitting more sophisticated applications to be developed on the third party management platform.
The interface with the third party platform will now be described in more detail. The Multi-Vendor, Multi-Platform Feature as defined has two basic groups of requirements:
• Provide a capability to easily build network interfaces that provide support equivalent to existing network interfaces.
• This capability must be provided in such a way that it can be extended in the future to support additional network interface requirements.

These requirements are translated into a simple three-layered architecture as illustrated in Figure 7. The translation framework 30 provides the support for general purpose translation systems and is intended to address the second type of requirement, namely extensibility.
The translation library 32 is a suite of generic translations that aid the process of building network interfaces. It is possible to build network interfaces using only the translation framework 30, but there is still the opportunity to factor out generic work and thereby simplify the task of adding new interfaces.
The vendor specific work 34 represents the additional configuration work that is necessary to build a network interface for a specific vendor's equipment. The vendor specific work is built upon the translation library, but as the Figure illustrates this is not necessary (although it is likely to involve more work).
A Network Interface (NI) performs the following tasks:
• Event Collection: The collection of events, in near real-time or in batch mode, pertaining to network objects, translation into the ResolveTM representation and storage in the SMIB.
• Object Synchronization: The uploading of objects (paths, path endpoints, etc.) from the element or network management system, conversion to the ResolveTM representation and storage in the SMIB.
• Statistics Importing: The importing of statistics for network objects, conversion to the ResolveTM representation and storage in a file for subsequent loading into the HIB.

A network interface communicates directly with the element/network management system (E/NMS) to collect information and translate it to a standard form. The network interface communicates this information to the ResolveTM Server.
The behaviour of the network interface is highly E/NMS dependent. Support for configuration is provided in the form of "mappings" which define how the network interface should translate, or map, information from the E/NMS to ResolveTM.
The network interface will typically make use of an E/NMS library of routines to access vendor/platform specific data. The mapping then performs the conversion of native information and relays the information back to the ResolveTM Server.
Figure 8 illustrates the idea by showing the flow of information from the network element 40 through the E/NMS 42 into the network interface 44 where it is then translated.
Figure 9 shows the relationship between the network interface shown in Figure 8 and the translation framework by illustrating the components that are used to construct a specific interface.
The Multi-Vendor, Multi-Platform feature supplies:
• the "Translation Framework"
• a library of vendor independent mappings (described below)
• a generic interface to E/NMSs that present data in flat-files
• a skeletal event collector with the ability to dynamically load vendor specific event collectors.

Using this approach it is possible to build a configurable multivendor network interface that can perform event collection, object synchronization and statistics collection. Such a network interface is then configured to meet the specific needs of an E/NMS. To build a specific network interface (e.g. to Ascend Frame Relay equipment) typically requires:
• some vendor specific mappings
• possibly an E/NMS interface (if communicating via flat-files is not enough)
• some additional simple translation utilities (if the flat-file library is to be used)

A further explanation of mappings, and how they can be used to perform event collection, object synchronization and statistics collection, is given below.
Figure 10 illustrates the two-step approach to interfacing with a third party platform in accordance with the invention. The native format, be it file-based or otherwise, is converted into an intermediate format. This translation is primarily syntactic in nature.
This intermediate format is then translated, by a "mapping", to the target format. This translation is typically concerned with semantic translation.
The native format is clearly E/NMS specific and therefore so is the translator that translates it to the intermediate format. The mappings are also E/NMS specific (although not as much as might be imagined at first) and therefore the construction of a network interface to a new type of E/NMS will typically require the creation of a translator and some mappings.
The rationale for a two-step translation is as follows:
1. Two forms of translation must be performed - syntactical and semantic. Syntactical transformation is much simpler and thus it is desirable to retain this simplicity by separating it from the complexity of the semantic transformation.
2. Two-step translation is a tried and trusted approach in compilation theory. The translation required for MV interfacing is a simplified form of the same problem faced by a compiler.
E/NMS data is imported in a two stage process. E/NMS data is first converted into a technology independent file format, called the ResolveTM Object Format, and then it is mapped to a ResolveTM object.
The ResolveTM Object Format consists of ResolveTM Canonical Objects (RCO).
Each RCO belongs to a named class, and has a (possibly empty) list of attributes, called ResolveTM Canonical Attributes (RCAs) and a (possibly empty) list of contained RCOs.

An RCA has a name and a value. No two RCAs in the same RCO may have the same name.
Nested RCOs may be used to express containment relationships. A nested RCO
inherits the attributes of its containing RCO. Attribute overriding is possible, with the most deeply nested value taking precedence. The effect is similar to the lexical scoping rules employed in most programming languages.
RCOs and RCAs may be represented in textual form, in the form of a ROF file.
The definition of a ROF file, using an Extended BNF notation, is as follows:

file ::= <header> { <RCO> }*
header ::= ROFV1.0
RCO ::= class <name> '{' { <RCA> }* { <RCO> }* '}'
RCA ::= <name> = <value> ;
name ::= <string>
value ::= <integer> | <real> | <string>

C++ style // comments may be used in a ROF file.
A ROF File is a header followed by a sequence of zero or more ResolveTM Canonical Objects (RCOs).
The class name of an RCO is used to identify a mapping. A mapping is an object that embodies the required semantic transformation of an RCO to a given format.
Note that there is a class name, but no instance name. The following examples illustrate some of the concepts identified above.
Thus the following is illegal because attribute1 is defined twice:

ROFV1.0
class A {
    attribute1 = "foo";
    attribute1 = 2;    // illegal - redefinition of attribute1
}

The following example shows a legal example of a nested RCO; attribute1 will have the value "bar" when the class B mapping is executed:

ROFV1.0
class A {
    attribute1 = 1;
    attribute2 = 3;
    class B {
        attribute1 = "bar";    // legal - overrides previous definition
        attribute2 = 2;        // also legal
    }
}

Note that two RCOs may have the same class name (in fact this is quite common), but they are not required to have the same RCAs. Thus the following is legal:

ROFV1.0
class A {
    name = foo;
    attribute1 = 1;
}
class A {
    name = bar;
    attribute2 = 1;
}
There is no specific mention of time in the ROF syntax. However time information can be readily included. The advantage of not including time as mandatory ROF
syntax is that it does not force a particular solution on to the mapping implementer. A
common technique is to take the time from the system clock so a mandatory requirement to include time information would be unnecessarily restrictive.
The disadvantage of not including time in the ROF syntax is that it puts the onus on the mappings to check for a syntactically correct time. However, this can be mitigated by use of inheritance in the mapping implementation language, which would permit such validation and subsequent processing to be implemented once and re-used many times.
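A sketch of that inheritance technique: a base mapping class validates the timestamp once, and every concrete mapping reuses it. The class names are assumptions for illustration, and the accepted timestamp form matches the ROF examples that follow.

#include <regex>
#include <stdexcept>
#include <string>

// Hypothetical base class for mappings whose RCOs carry a timestamp.
// The validation is implemented once and inherited by all mappings.
class TimestampedMapping {
protected:
    // Accepts the "YYYY-MM-DD HH:MM:SS" form used in the examples.
    void checkTimestamp(const std::string& ts) const {
        static const std::regex form(
            R"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})");
        if (!std::regex_match(ts, form)) {
            throw std::invalid_argument("malformed timestamp: " + ts);
        }
    }
};

class FrStatsMapping : public TimestampedMapping {
public:
    void run(const std::string& timestamp, int a, int b) {
        checkTimestamp(timestamp);     // inherited validation
        (void)a; (void)b;              // semantic translation would go here
    }
};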
In the following example the object with a class name of fr_stats implicitly has an attribute called timestamp.
ROFV1.0
class time {
    timestamp = "1997-11-11 12:30:05";
    class fr_stats {
        a = 1;
        b = 2;
    }
    class fr_stats {
        a = 2;
        b = 3;
    }
}

From the perspective of the fr_stats class, the following example is equivalent to the previous example:

ROFV1.0
class fr_stats {
    timestamp = "1997-11-11 12:30:05";
    a = 1;
    b = 2;
}
class fr_stats {
    timestamp = "1997-11-11 12:30:05";
    a = 2;
    b = 3;
}

The ROF format outlined in the previous section captures only the syntactic aspects of translation. It is easy to imagine how a file from an E/NMS may be (syntactically) transformed into a ROF file, but it is less clear what purpose this might serve as the more challenging problem of semantic translation is not addressed.
Semantic translation is performed by objects called mappings. The class name in an RCO identifies the mapping to be used on it. A mapping is applied to an RCO. The following example illustrates how mappings are related to RCOs:

ROFV1.0
class fr_stats {
    filename = "stats1.dat";
    a = 1;
    b = 2;
}
class ATM_stats {
    filename = "stats2.dat";
    X = 3;
    Y = 4;
}

This can be interpreted as follows - find a mapping called fr_stats and execute it, passing as parameters the values of the RCAs of the first RCO (namely filename, a and b). And then, find a mapping called ATM_stats and execute it, passing as parameters the values of the RCAs of the second RCO (namely filename, X and Y).

Note that mappings are applied to individual RCOs, not to complete ROF Files. A ROF File is a way of defining a collection of RCOs, each of which in turn identifies the mapping that should be applied to it.
Quite what the mapping actually does is not important (to this explanation). It would be possible to write a mapping called fr_stats that took the values of a and b and added them together and wrote the result to a file called "stats.dat" (i.e. the value of filename). Thus the implementation of a mapping becomes the means by which semantic translation is expressed. As will be discussed in later sections, mappings are implemented in a conventional programming language (C++ and Java in this release) and may be as simple or as complex as is needed to express the required translation.
The Multi-Vendor Parser is a particular use of the translation framework that will permit the development of network interfaces where data are presented in the form of flat-files.
The following figure illustrates the components of the MV parser (shown inside the dotted line). It is a specific application of the translation framework shown in Figure 7.
The parser is used in conjunction with a library of vendor independent mappings and some vendor specific mappings.
The multivendor Parser is shown in Figure 11. The MV Parser reads a ROF file (as defined above) and invokes the appropriate mapping for each of the resulting RCOs.
The structure of a ROF file, indeed almost all E/NMS data files, is such that an attempt at error recovery (for example, skipping until the next "class") could result in incorrectly set information. The advantages of error recovery are outweighed by the impact of reporting data inaccurately as a result of incorrect recovery.
The Parser is typically invoked from the command line, or scheduled to run at pre-determined intervals, and it is given the name of a file to be parsed. If the ROF file is successfully parsed the MV parser proceeds as follows:
1. The MV Parser constructs an RCO whose class name is MappingStart, and adds attributes for each of the attributes present in the MV parser configuration file, or passed on the command line. The MV parser then attempts to invoke a mapping named MappingStart.
2. The MV Parser performs a depth-first traversal of the RCOs such that each RCO is visited in the order in which it is first encountered in the input file. For each RCO the engine attempts to locate a C++ implementation of the mapping with the same name as the class name of the RCO. If no C++ implementation is found, an attempt is made to locate a Java implementation. If no implementation is found the engine skips to the next RCO.
3. Once a mapping has been loaded the MV Parser invokes the runMapping operation on the mapping object, passing as parameters the RCAs of the RCO in question.

4. Once all RCOs have been visited the MV Parser constructs an RCO whose class name is MappingEnd and attempts to invoke a mapping of the same name.
5. The MV Parser exits.
It is normally acceptable for there to be no mapping of a given name. If a mapping is not found for a given RCO the MV parser proceeds to the next RCO. It is possible to request that an error message is emitted by changing a configuration option but this merely generates an error message; it does not prevent the MV parser from proceeding to the next RCO.
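The traversal and lookup behaviour of steps 2 and 3, together with the missing-mapping rule just described, can be sketched as follows. The Rco type, the registry lookup and the diagnostics are assumptions, not the framework's actual interfaces.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical RCO node as produced by the parser.
struct Rco {
    std::string className;
    std::vector<Rco> children;
};

struct Mapping {
    virtual ~Mapping() = default;
    virtual void runMapping(const Rco& rco) = 0;
};

// Stub: a real engine would look for a C++ implementation first,
// then a Java implementation; null means "no mapping of that name".
std::unique_ptr<Mapping> locateMapping(const std::string& name) {
    (void)name;
    return nullptr;
}

// Depth-first traversal in input order; RCOs without a mapping are
// skipped, optionally emitting a message (cf. -missing_mapping).
void visit(const Rco& rco, bool warnOnMissing) {
    if (auto mapping = locateMapping(rco.className)) {
        mapping->runMapping(rco);
    } else if (warnOnMissing) {
        std::cerr << "no mapping for class " << rco.className << '\n';
    }
    for (const auto& child : rco.children) {
        visit(child, warnOnMissing);
    }
}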
The following example illustrates the order of mapping execution:

ROFV1.0
class One {
    class Two {}
    class Three {
        class Four {}
    }
}
class Five {}

The mappings would be executed in the order MappingStart, One, Two, Three, Four, Five, MappingEnd.
The MV Parser accepts the following command line arguments. As with other ResolveTM modules, all values (except -ptag) can be set in a configuration file. If a command line argument/configuration parameter is set incorrectly the MV Parser will terminate immediately.
• -ptag <tag>
This flag is used in the same manner as other ResolveTM modules.
• -file <filename>
Identifies the file to be parsed. This argument is required. Only one file may be specified.
• -check_only [true | false]
If set to true the parser only checks the input for validity as per the grammar above; no mappings are executed and no mapping engine is created. The parser terminates after printing a message that indicates whether or not the file was valid. This argument is optional. The default is false, which means that the parser does not print such a message and proceeds to apply mappings as described above.
• -version [true | false]
If set to true the parser prints its version number and the date it was last built. This argument is optional. The default is false.
• -start_mapping <name>
-end_mapping <name>
The names of the start and end mappings. Both are optional. If omitted they default to MappingStart and MappingEnd respectively.
• -loadjava [true | false]
If set to false then the Java Mapping Engine is not used, and therefore only C++ implementations of mappings will be permitted. This argument is optional. The default is true.
• -parse_failure [informational | serious | fatal | operator | none]
Defines the severity of the error message if an input file fails to parse. This argument is optional. The default is informational. If set to none, no error message will be emitted in the event of a parse failure.
• -missing_mapping [info | warning | serious | fatal | operator | none]
Defines the severity of the error message if a mapping is not located. This argument is optional. The default is none, which indicates no error message will be emitted if a mapping cannot be located.
• -classpath <classpath>
This is used by the Java Virtual Machine (JVM). If it is not set the parser uses the $CLASSPATH environment variable. If $CLASSPATH is not set it uses $RESOLVETMHOME/java. If $RESOLVETMHOME is not set then the JVM cannot be initialized, an error message (severity serious) is logged, and no Java mappings can be executed. This option is ignored if loadjava is set to false (see above).
• -dispatching_class <name>
-dispatching_method <name>
-dispatching_package <name>
The name of the dispatching class and dispatching method used to load Java classes. The classes are loaded from the package specified by -dispatching_package. These flags are optional and are intended to be used only by developers of the translation framework. Users of the framework would not normally use these flags. These options are ignored if loadjava is set to false (see above).
Note that by convention ROF files have a ".rof" extension, although this is not enforced by the parser.
The Multi-Vendor Event Collector is a particular application of the translation framework to the specific task of event collection.
Some E/NMSs emit event information in a flat-file form and therefore this information may be processed by the MV parser (with the appropriate mappings, of course). However it is more common to collect information through some sort of API, and thus an alternative approach is required.
Figure 12 illustrates the components that make up the MV Event Collector (inside the dotted line) and those that must be added to it (outside the dotted line):
The MV Event Collector contains fewer standard components than the MV Parser as the "south bound" interface is not known (in contrast with the parser, where it is known to be a ROF file).
Recall that mappings act upon RCOs. RCOs are created by the MV parser, but it is possible to create them directly, without recourse to a flat-file. It is therefore possible to utilize mappings and the translation framework for E/NMSs that present information in formats other than flat-file.

Event collectors can be started, stopped and requested to perform synchronization operations in a consistent manner. If an event collector "dies" a director process detects this and will restart it.
The MV Event Collector supports this same interaction (i.e. the RCI interface) ensuring that all new multi-vendor event collectors appear to the rest of ResolveTM (and hence to the users of ResolveTM) to be indistinguishable from existing 46020 event collectors.
Upon start-up the MV Event Collector performs the following steps.
1. The MV Event Collector loads a vendor specific library and invokes a start function in the library. The intended behaviour of this function is to connect to the E/NMS and collect event related information. Upon receipt of some event information the vendor specific collector would typically create an RCO (with associated RCAs) and request that a mapping be invoked on the RCO.
A prototypical vendor specific event collector function would be as follows:

begin
    connect to E/NMS
    if sync on start required then
        do sync
    end if
    while not stopped
        case getCollectorState()
        sync:
            do sync
        event:
            request event info
            create RCO
            add RCAs containing information from event info
            invoke mapping on RCO
        shutdown:
            stopped = true
        end case
    end while
    disconnect from E/NMS
end

Note that getCollectorState() is periodically called in the loop. This indicates whether the vendor specific event collector should be collecting events, synchronizing or shutting down. If the MV Event Collector is suspended then the getCollectorState() function is blocked until the MV Event Collector resumes. In the interests of efficiency the vendor specific event collector should not call this function too often; once every second or so is adequate.
2. In a separate thread the MV Event Collector listens for RCI commands. After receipt of a sync message the MV Event Collector will return 'sync' the next time getCollectorState() is called. Upon receipt of a suspend message the MV Event Collector will block the collector thread next time getCollectorState() is called. The collector thread will remain blocked until an unsuspend or shutdown message is received. Upon receipt of a shutdown message the MV Event Collector will ensure that the next call to getCollectorState() returns 'stopped'. The collector thread will be given up to 30 seconds to terminate, before it will be killed and the MV Event Collector exits.
The vendor specific event collector thread functions are clearly E/NMS
specific. Such functions are written in C (or C++ with C-style external linkage) and are kept in a shared library. The MV Event Collector will dynamically load this shared library at run time.
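The dynamic-loading arrangement can be sketched as follows, matching the start function signature given for -startFunction below; MV_EcProcManProxy and CK_Config are stand-ins for the framework's real types, and the meaning of the RWBoolean argument is assumed.

#include <dlfcn.h>
#include <iostream>

// Placeholder types standing in for the framework's real classes.
using RWBoolean = bool;
struct MV_EcProcManProxy {};
struct CK_Config {};

// Signature required of the vendor specific start function.
using StartFn = int (*)(RWBoolean, MV_EcProcManProxy*, CK_Config*);

int loadAndStart(const char* library, const char* startFunction,
                 MV_EcProcManProxy* proxy, CK_Config* config) {
    // dlopen follows the usual shared library search path.
    void* handle = dlopen(library, RTLD_NOW);
    if (!handle) {
        std::cerr << "cannot load " << library << ": " << dlerror() << '\n';
        return -1;   // the collector would shut down with an error
    }
    // Locate the vendor specific start function by name.
    auto start = reinterpret_cast<StartFn>(dlsym(handle, startFunction));
    if (!start) {
        std::cerr << "cannot find " << startFunction << '\n';
        return -1;
    }
    return start(true, proxy, config);   // RWBoolean meaning assumed
}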
The MV Event Collector accepts the following command line arguments. As with other ResolveTM modules, all values (except -ptag) can be set in a configuration file.
~ -ptag <tag>
This flag is used in the same manner as other ResolveT~~ modules ~ -pmode [pmsync~ pmevent]
This is the same as for the 46020 Event Collectors. If omitted the mode defaults to pmevent.

~ -version [true ~ false]
If set to true the MV Event Collector prints its version number and the date it was last built. This argument is optional. The default is false.
• -loadjava [true | false]
If set to false then support for Java is disabled, and therefore only C++ implementations of mappings will be permitted. This argument is optional. The default is true.
• -library <shared library>
Identifies the shared library to be loaded that contains the implementation of the start, sync and stop functions. Shared libraries are loaded using dlopen and therefore follow the search path defined for that call (see man dlopen for details). This argument is mandatory; if it is not specified the MV Event Collector will shut down immediately with an error.
• -startFunction <function name>
Identifies the name of the start function for the vendor specific event collector. The function must have the parameter list (RWBoolean, MV_EcProcManProxy*, CK_Config*) and return type int. The function is found using dlsym (see man dlsym for details). This argument is mandatory; if it is missing or the function is not found the MV Event Collector will shut down immediately with an error. (A sketch of how -library and -startFunction are resolved appears after this list.)
• -classpath <classpath>
This is used by the Java Virtual Machine (JVM). If it is not set, the $CLASSPATH environment variable is used. If $CLASSPATH is not set, $RESOLVETMHOME/java is used. If $RESOLVETMHOME is not set then the JVM cannot be initialized, an error message (severity serious) is logged and no Java mappings can be executed. This option is ignored if loadjava is set to false (see above).
• -dispatching_class <name>
• -dispatching_method <name>
• -dispatching_package <name>
The names of the dispatching class and dispatching method used to load Java classes. The classes are loaded from the package specified by -dispatching_package. These flags are intended to be used only by developers of the translation framework; users of the framework would not normally use them. These options are ignored if loadjava is set to false (see above).
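The following sketch shows how the -library and -startFunction arguments might be resolved at start-up. It is illustrative only: the error texts and the StartFn typedef are assumptions, though dlopen and dlsym are used with their standard signatures.

    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Assumed shape of the vendor start function (see -startFunction above).
    typedef int (*StartFn)(int /*RWBoolean*/, void * /*MV_EcProcManProxy*/,
                           void * /*CK_Config*/);

    StartFn loadStartFunction(const char *library, const char *startFunction)
    {
        // Resolve the shared library named by -library.
        void *handle = dlopen(library, RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "cannot load %s: %s\n", library, dlerror());
            exit(1);                 // shut down immediately with an error
        }

        // Resolve the symbol named by -startFunction.
        StartFn start = (StartFn) dlsym(handle, startFunction);
        if (start == NULL) {
            fprintf(stderr, "cannot find %s: %s\n", startFunction, dlerror());
            exit(1);
        }
        return start;
    }

In practice the handle would be retained so that the sync and stop functions can be resolved from the same library.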
Specific Example

Statistics are currently imported into the ResolveTM HIB in a format known as ckload. An MV statistics importer is a translator that can convert from the E/NMS format into ckload format. In keeping with the two stage process, the E/NMS data is first translated to a ROF file (syntactic translation) and then some mappings (semantic translation) are applied to generate ckload files.
Suppose the E/NMS data were as follows:

3,123,23,11,22

Suppose further that the fields refer to the nmsID, circuit termination point ID, number of transmitted bytes (of two types) and number of received bytes respectively. It would be trivial (in this simple example) to translate that into a ROF equivalent. For example:
ROF 1.0
class ckload {
    filename = SIfr_ctp_1998_01-122315;
}
class fr_ctp_stats {
    nmsID = 3;
    ctpid = 123;
    TxBytes_typeA = 23;
    TxBytes_typeB = 11;
    RxBytes = 22;
}

Now mappings might be written as follows:

• ckload
The ckload mapping takes the value of the filename attribute and opens a file of that name.
• fr_ctp_stats
The fr_ctp_stats mapping takes the values of its attributes and writes them in ckload format to the file previously opened. Now the ckload format understands TxBytes, but not the concept of two different types. Thus the mapping takes the values of TxBytes_typeA and TxBytes_typeB, adds them together and writes out the result. This is a form of semantic translation (see the sketch after this list).
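A sketch of the semantic step of such a mapping is given below. The RCO stand-in, the accessor names and the exact ckload record layout are assumptions made for illustration; only the summing of the two TxBytes attributes is taken from the description above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // Illustrative stand-in for an RCO: a small name/value attribute list.
    struct RCO {
        const char *names[16];
        const char *values[16];
        int count;

        const char *get(const char *name) const {
            for (int i = 0; i < count; i++)
                if (strcmp(names[i], name) == 0)
                    return values[i];
            return "0";
        }
    };

    // Sketch of the fr_ctp_stats mapping: writes one ckload record,
    // summing the two vendor specific TxBytes counters into the single
    // TxBytes figure that ckload understands (semantic translation).
    void fr_ctp_stats_mapping(const RCO &rco, FILE *out)
    {
        long txBytes = atol(rco.get("TxBytes_typeA"))
                     + atol(rco.get("TxBytes_typeB"));
        fprintf(out, "%s,%s,%ld,%s\n",
                rco.get("nmsID"), rco.get("ctpid"),
                txBytes, rco.get("RxBytes"));
    }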
The advantage of such techniques is that the format of ckload files is fixed and therefore it is possible to define a set of standard mappings. Thus to translate a new vendor's statistics it is only necessary to translate to the target ROF format; the standard mappings then perform the necessary translation. This reduces the task of developing a "Vendor X" statistics importer to building a syntax translator only.
In cases where syntax translation is not sufficient it is possible to extend the standard mappings by means of inheritance (for mappings are implemented in an OO language). For example, suppose that Cascade FR UNI statistics were identical in all respects to FR UNI statistics as understood by ResolveTM, with the sole exception of the calculation of the entity ID. It would be possible to write a new mapping (e.g. cascade_fr_uni_stats) that inherited the behaviour of the standard mapping but overrode the translation of the entity ID attribute. The developer of this mapping does not have to re-state the standard translation, only the parts where the required translation differs, as sketched below.
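In outline, such a derived mapping might look as follows. The class and method names are invented for illustration, as is the Cascade-specific calculation; the point is only that the entity ID step is a virtual function that the derived mapping overrides.

    #include <stdio.h>
    #include <stdlib.h>

    // Sketch of the standard FR UNI statistics mapping.
    class FrUniStatsMapping {
    public:
        virtual ~FrUniStatsMapping() {}

        void apply(const char *rawEntityId) {
            printf("entityID = %ld\n", translateEntityId(rawEntityId));
            // ... standard translation of the remaining attributes ...
        }

    protected:
        // Standard calculation of the entity ID.
        virtual long translateEntityId(const char *raw) {
            return atol(raw);
        }
    };

    // Cascade variant: inherits the whole mapping and restates only
    // the one step where the required translation differs.
    class CascadeFrUniStatsMapping : public FrUniStatsMapping {
    protected:
        virtual long translateEntityId(const char *raw) {
            return atol(raw) * 2;   // stand-in for the Cascade calculation
        }
    };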
If an E/NMS makes synchronization and/or event information available in the form of an ASCII file, the MV Parser can be used to perform the necessary synchronization/event propagation. Again, this requires the development of mapping objects that make CORBA calls to the Service Information API. For example, the following excerpt from a ROF file suggests how event information could be recorded:

ROF 1.0
class avc_event {
    ...
    service_component_type = FR_PVC;
    nmsID = 101;
    name = pvc123;
    op_state = enabled;
}
class avc_event {
    service_component_type = FR_PVC;
    nmsID = 101;
    name = pvc124;
    admin_state = locked;
}
Given such a file it is trivial to develop a mapping that uses this information to make CORBA calls to set the administrative/operational state of the specified service component. Such a mapping would be applicable to all service component types and would therefore need to be developed only once. (This is the advantage of the two-step approach to translation.)
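In outline, and with a wholly invented proxy interface standing in for the CORBA stubs that would be generated from the Service Information IDL, such a mapping might look like this:

    #include <stddef.h>

    // Hypothetical proxy for the Service Information API; in practice
    // this would be a CORBA stub. All names here are invented.
    struct ServiceInfoProxy {
        void setOpState(const char *componentType, int nmsID,
                        const char *name, const char *state);
        void setAdminState(const char *componentType, int nmsID,
                           const char *name, const char *state);
    };

    // Sketch of the avc_event mapping. One mapping serves every service
    // component type because the type travels as an ordinary attribute.
    void mapAvcEvent(ServiceInfoProxy &proxy,
                     const char *componentType, int nmsID, const char *name,
                     const char *opState,      // NULL if absent from the RCO
                     const char *adminState)   // NULL if absent from the RCO
    {
        if (opState != NULL)
            proxy.setOpState(componentType, nmsID, name, opState);
        if (adminState != NULL)
            proxy.setAdminState(componentType, nmsID, name, adminState);
    }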
In cases where an E/NMS provides event information through an API, and not through a flat file, the approach outlined in B.2 above is still applicable. Recall that mappings work on RCOs, and that RCOs may be created directly. Thus it would be possible to collect event information from a vendor specific event collector and then create an RCO. For example, an RCO avc_event, with RCAs called service_component_type and nms_id, can be created and mapped without a flat file:
ROFObject objectName("avc_event");
ROFAttribute *attribPtr = NULL;

while (more attributes) {
    switch (attribute type) {
    case service_component_type:
        attribPtr = new ROFAttribute("service_component_type", attribute_value);
        break;
    case nms_id:
        attribPtr = new ROFAttribute("nms_id", attribute_value);
        break;
    }
    objectName.addAttribute(attribPtr);
}
runMapping(objectName);
The following discussion outlines the steps that would typically be performed to configure the translation framework for a specific E/NMS. It also includes some recommendations on how this configuration should be approached.
Building a network interface involves two separate translation steps - syntax translation and semantic translation. Both steps pivot around the central notion of an intermediate (ROF-based) representation and therefore choosing this representation is very important.
The main factor that will govern the choice of intermediate representation is an awareness of the library of pre-defined mappings. These pre-defined mappings have been specifically designed to ease the task of building network interfaces, and it is anticipated that a new network interface would take these mappings as a starting point and then only derive vendor specific differences. Each of these mappings defines the required intermediate format. For example, suppose that there is a mapping that accepts RCOs in the following form:

class dummy {
    dummyX = <number>;
    dummyY = <number>;
}

Suppose further that the semantics of this mapping are appropriate for a new network interface, except that there is an extra number to consider, Z. The new intermediate format might then be:

class dummyPlus {
    dummyX = <number>;
    dummyY = <number>;
    extraZ = <number>;
}

By choosing this representation it is possible to implement a mapping for dummyPlus that will inherit the functionality of dummy. The dummy mapping will still work because the names of the attributes are preserved.
Thus in this example, re-use of a mapping determined the intermediate format.
If an existing mapping cannot be re-used then the choice of intermediate format becomes more open. However, the following guidelines should be considered:

• Avoid objects with a large number of attributes.
The time taken to execute a mapping is influenced by the number of attributes (including inherited ones). An RCO with, say, 30 attributes would suggest that the object in question is too big.
• Remember that the MV Parser runs to completion and then exits.
Therefore if some mappings need to build intermediate data structures, remember that these will be rebuilt every time the parser is executed.
The syntax translator takes information in a native format and translates it to the intermediate format.

The main goal of this syntax translator is to perform syntactical translation. More complex (semantic) transformation is to be avoided, for the following reasons:

• Right tool for the right job
The mappings were designed to perform semantic translation. Numerous tools exist for doing syntactical translations.
• Limit the impact of syntactic changes
It is common for two versions of the same network management system to output the same information in slightly different formats. By restricting the syntax translation to syntax only, it is necessary to develop two syntax translators but keep only one semantic translator. Had the syntax translation taken on more semantic transformation it would be difficult to achieve this level of separation.
If the native format is obtained through some API then the implementation approach is largely determined by the API. If the native format is some sort of file format then a wide variety of tools are available: Perl, awk, yacc/lex, etc., all provide reasonable approaches to performing syntactic translation. (A small example follows.)
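As a concrete illustration in C++, the comma-separated statistics record from the earlier example could be reformatted into its ROF equivalent as follows. Note that the translator deliberately does no semantic work (it does not, for instance, sum the two TxBytes counters); that remains the job of the mappings.

    #include <stdio.h>

    // Syntax translation only: reformat each CSV statistics record
    // (nmsID,ctpID,TxBytesA,TxBytesB,RxBytes) as a ROF object.
    int main()
    {
        long nmsID, ctpid, txA, txB, rx;
        while (scanf("%ld,%ld,%ld,%ld,%ld\n",
                     &nmsID, &ctpid, &txA, &txB, &rx) == 5) {
            printf("class fr_ctp_stats {\n");
            printf("    nmsID = %ld;\n", nmsID);
            printf("    ctpid = %ld;\n", ctpid);
            printf("    TxBytes_typeA = %ld;\n", txA);
            printf("    TxBytes_typeB = %ld;\n", txB);
            printf("    RxBytes = %ld;\n", rx);
            printf("}\n");
        }
        return 0;
    }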
The above described method permits an application, such as a service level management application, to interface with multiple vendor platforms without the need to tailor the underlying application to each vendor platform.

Claims (13)

Claims:
1. A method of importing data into a data processing system from a third party platform collecting data from physical objects, characterized in that an interface object having a canonical form and a set of extensions is created on said third party platform for each physical object represented in said data processing system, said canonical form is mapped to an underlying, externally invisible form existing in said data processing system, said data is converted to said canonical form in said interface objects, said data is transferred through said mappings to said data processing system, and additional interface objects representing new physical objects are created using said extensions to derive new mappings for said additional objects, whereby said data processing system is decoupled from said third party platform.
2. A method as claimed in claim 1, wherein said canonical form is an abstract data format that supports arbitrary attributes on arbitrary objects.
3. A method as claimed in claim 1 or 2, wherein said imported data is availability and performance information.
4. A method as claimed in any one of claims 1 to 3, wherein a service model object is also exported to the third party management platform to generate alarms and/or trouble ticket information related to the object model.
5. A method as claimed in any one of claims 1 to 4, wherein a gateway application exchanges data between the system and the platform.
6. A method as claimed in claim 5, wherein said gateway application is run on said third party platform.
7. A method as claimed in claim 5 or 6, wherein said gateway interacts with the user platform using locally specified application programming interfaces.
8. A method as claimed in any one of claims 1 to 7, wherein said data processing system contains platform objects representing physical objects in said third party platform, and actions performed on said interface object are mapped onto said platform objects.
9. A method as claimed in claim 8, wherein said mapping of said actions into said platform objects is performed with a Turing complete language.
10. A method as claimed in claim 9, wherein said third party platform is a network management system and said platform objects are network objects.
11. A method as claimed in claim 5, wherein said service model object is exported as a flat file.
12. A method as claimed in any one of claims 1 to 11, wherein two forms of mapping are performed, namely syntactical and semantic.
13. A method as claimed in any one of claims 1 to 12, wherein said interface objects are created automatically as physical objects are added to the system.
CA002338937A 1997-07-31 1998-07-31 Third party management platforms integration Abandoned CA2338937A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002338937A CA2338937A1 (en) 1997-07-31 1998-07-31 Third party management platforms integration

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CA2,212,251 1997-07-31
CA002212251A CA2212251A1 (en) 1997-07-31 1997-07-31 Resolve (tm) gateway third party management platforms integration
CA002338937A CA2338937A1 (en) 1997-07-31 1998-07-31 Third party management platforms integration
PCT/CA1998/000738 WO1999007109A1 (en) 1997-07-31 1998-07-31 Third party management platforms integration

Publications (1)

Publication Number Publication Date
CA2338937A1 true CA2338937A1 (en) 1999-02-11

Family

ID=25679521

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002338937A Abandoned CA2338937A1 (en) 1997-07-31 1998-07-31 Third party management platforms integration

Country Status (1)

Country Link
CA (1) CA2338937A1 (en)


Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued