
WO2025074407A1 - Method and system for counters and key performance indicators (kpis) policy management in a network - Google Patents


Info

Publication number
WO2025074407A1
Authority
WO
WIPO (PCT)
Prior art keywords
breach
ipm
kpis
counters
policies
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/051966
Other languages
French (fr)
Inventor
Aayush Bhatnagar
Ankit Murarka
Jugal Kishore
Gaurav Kumar
Kishan Sahu
Rahul Kumar
Sunil Meena
Gourav Gurbani
Sanjana Chaudhary
Chandra Ganveer
Supriya Kaushik De
Debashish Kumar
Mehul Tilala
Dharmendra Kumar Vishwakarma
Yogesh Kumar
Niharika Patnam
Harshita Garg
Avinash Kushwaha
Sajal Soni
Srinath Kalkivayi
Vitap Pandey
Manasvi Rajani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd
Publication of WO2025074407A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0894 Policy-based network configuration management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/16 Threshold monitoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/20 Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV

Definitions

  • Embodiments of the present disclosure generally relate to network management systems. More particularly, embodiments of the present disclosure relate to counters and key performance indicators (KPIs) policy management in a network.
  • Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network.
  • Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
  • a network node or network element, such as a base station, an access point (AP), a router, etc., collects event statistics in the form of performance counters and sends them to a network performance management system for diagnostic purposes. These performance counters may be logged and maintained by the management system in order to assess the performance of network nodes. To catch abnormalities, the user would need to check the reports on a regular basis, and the results were also prone to human error. Thus, there is a need in the art to help the user by reducing the grunt work and automating the tasks which need to be performed after having observed any kind of breach. Also, KPI values act as metrics for some real-world problems.
  • the current KPI values are analysed and compared with the past values for getting the trend in terms of the absolute change as well as the percentage change.
  • the people who perform the monitoring and observation tasks take note of every kind of change happening in the KPIs they are responsible for. Normally, a user would download an Excel report from a dashboard page and perform some calculations in Excel to get the increment, decrement, or no-change type of changes for the date he/she has chosen.
  • Graphs can be used to visualize the ups and downs in KPIs.
  • An aspect of the present disclosure may relate to a method for counters and key performance indicators (KPIs) policy management in a network.
  • the method comprises transmitting, by a transceiver unit, from a cron scheduler a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) module.
  • the method further comprises receiving, by the transceiver unit, at the IPM module, a request for a report comprising a set of counters and a set of KPIs.
  • the method comprises identifying, by an identification unit, at the IPM module, a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the method comprises evaluating, by an evaluation unit, at the IPM module, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the method comprises identifying, by the identification unit, at the IPM module, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds.
  • the method comprises generating, by a report generation unit, at the IPM, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds.
  • the method further comprises sending, by the transceiver unit, from the IPM module, the one or more reports to one or more users based on the set of policies.
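The evaluation flow summarized in the steps above — execute scheduled policies, compare each counter and KPI against per-severity breach thresholds, collect breach conditions, and assemble a report — can be sketched as follows. All names, data shapes, and the severity ordering are illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch of policy evaluation against severity breach thresholds.
SEVERITY_ORDER = ["critical", "major", "minor"]  # checked from most to least severe

def evaluate_policy(policy, current_values):
    """Return breach conditions for one policy's counters and KPIs."""
    breaches = []
    for metric, thresholds in policy["thresholds"].items():
        value = current_values.get(metric)
        if value is None:
            continue
        for severity in SEVERITY_ORDER:
            limit = thresholds.get(severity)
            # A breach condition exists when the current value exceeds
            # the corresponding severity breach threshold.
            if limit is not None and value > limit:
                breaches.append({"metric": metric, "value": value,
                                 "severity": severity, "threshold": limit})
                break  # record only the highest severity breached
    return breaches

def generate_report(policies, current_values):
    """Aggregate breach conditions across the identified set of policies."""
    report = {}
    for policy in policies:
        found = evaluate_policy(policy, current_values)
        if found:
            report[policy["name"]] = found
    return report
```

For instance, a latency value of 120 ms against thresholds of 50/100/200 ms (minor/major/critical) yields a single "major" breach condition.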
  • prior to transmitting the request for execution of the one or more policies from the cron scheduler to the IPM, the method comprises creating, at a user interface unit, the one or more policies. Each policy from the one or more policies is associated with data. The method further comprises transmitting, by the user interface unit to the IPM, the one or more policies comprising the data. Further, the method comprises storing, by a storage unit, at the IPM, the one or more policies in a database. Furthermore, the method comprises forwarding, by the transceiver unit, from the IPM to the cron scheduler, a request to schedule the one or more policies based on the data.
  • the data associated with each of the policy from the one or more policies comprises one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters, one or more notification templates and a user notification group information.
  • the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs comprises a time interval type and a time interval size.
  • the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
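The policy data enumerated above (counters, KPIs, aggregation levels, schedules with a time interval type and size, per-severity breach thresholds, notification templates, and a user notification group) might be modeled as below; every field name and type here is an assumption for illustration only:

```python
# Hypothetical data model for a policy, loosely following the fields
# enumerated in the disclosure. Names and types are illustrative.
from dataclasses import dataclass

@dataclass
class Schedule:
    interval_type: str   # e.g. "hourly", "daily"
    interval_size: int   # e.g. every 2 hours -> interval_size=2

@dataclass
class Policy:
    name: str
    counters: dict            # counter name -> Schedule
    kpis: dict                # KPI name -> Schedule
    aggregation_levels: dict  # KPI name -> aggregation level, e.g. "cell"
    thresholds: dict          # metric name -> {severity: threshold value}
    notification_template: str
    notification_group: list  # users to notify on breach

policy = Policy(
    name="throughput-policy",
    counters={"dropped_packets": Schedule("hourly", 1)},
    kpis={"throughput_mbps": Schedule("daily", 1)},
    aggregation_levels={"throughput_mbps": "cell"},
    thresholds={"throughput_mbps": {"major": 100.0}},
    notification_template="Breach of {metric}: {value}",
    notification_group=["ops-team"],
)
```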
  • the set of breach conditions associated with the set of counters and the set of KPIs is identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds a corresponding severity breach threshold from the set of severity breach thresholds.
  • the method further comprises sending, by the IPM, the set of breach conditions to a learning module.
  • the method comprises calibrating, by a calibration unit, at the learning module, the severity breach thresholds associated with the set of breach conditions. The calibration is based on a set of factors comprising at least one of a weather, a holiday and a disaster.
  • the method comprises modifying, by the calibration unit, the severity breach thresholds for the set of policies.
  • the method further comprises storing, by the storage unit, by the learning module, the modified severity breach thresholds for the set of policies in the database.
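The calibration step above — the learning module adjusting severity breach thresholds based on factors such as weather, a holiday, or a disaster — could look roughly like this; the factor multipliers are invented for illustration and are not taken from the disclosure:

```python
# Hypothetical threshold calibration by contextual factors; multipliers
# are illustrative assumptions, not values from the disclosure.
FACTOR_MULTIPLIERS = {
    "weather": 1.10,   # allow some extra headroom during severe weather
    "holiday": 1.25,   # traffic spikes expected on holidays
    "disaster": 1.50,  # large deviations expected during disasters
}

def calibrate_thresholds(thresholds, active_factors):
    """Return modified per-severity thresholds scaled by the active factors."""
    scale = 1.0
    for factor in active_factors:
        scale *= FACTOR_MULTIPLIERS.get(factor, 1.0)
    return {severity: round(value * scale, 2)
            for severity, value in thresholds.items()}
```

The modified thresholds would then be stored back in the database for use by subsequent policy runs.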
  • the method comprises running, by an execution unit, at the cron scheduler, a cron for the set of KPIs and the set of counters.
  • the severity breach thresholds are fetched from the database.
  • the method further comprises triggering, by an alert unit, one or more alarms based on the set of breach conditions.
  • the one or more reports sent to the one or more users comprises a delta KPI report, wherein the delta relates to the difference in result between the previously sent reports and the generated one or more reports.
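The delta KPI report described above — the difference in result between the previously sent report and the newly generated one — can be sketched as follows, with the report shape being an assumption:

```python
# Sketch of a delta KPI report: per-KPI absolute and percentage change
# between the previously sent report and the newly generated one.
def delta_report(previous, current):
    """Compute per-KPI deltas between two reports (KPI name -> value)."""
    deltas = {}
    for kpi, new_value in current.items():
        old_value = previous.get(kpi)
        if old_value is None:
            # No prior value to compare against for this KPI.
            deltas[kpi] = {"absolute": None, "percent": None}
            continue
        absolute = new_value - old_value
        percent = (absolute / old_value * 100.0) if old_value else None
        deltas[kpi] = {"absolute": absolute, "percent": percent}
    return deltas
```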
  • the system comprises a transceiver unit.
  • the transceiver unit is configured to transmit, from a cron scheduler, a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM).
  • the transceiver unit is further configured to receive, at the IPM, a request for a report comprising a set of counters and a set of KPIs.
  • the system further comprises an identification unit.
  • the identification unit is configured to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the system comprises an evaluation unit.
  • the evaluation unit is configured to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the identification unit is configured to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds.
  • the system further comprises a report generation unit.
  • the report generation unit is configured to generate at the IPM, one or more reports comprising the set of breach conditions.
  • the breach conditions are calibrated based on the severity breach thresholds.
  • the transceiver unit is configured to send from the IPM, the one or more reports to one or more users based on the set of policies.
  • the UE comprises a user interface unit.
  • the user interface unit is configured to create, one or more policies comprising a set of counters and a set of KPIs.
  • the UE comprises a transceiver unit to send a request to a load balancer to save the one or more policies.
  • the transceiver unit is further configured to send a request, for fetching a result for the set of counters and the set of KPIs.
  • the transceiver unit is further configured to receive, a report comprising the result for the set of counters and the set of KPIs.
  • the result comprises one or more highlights for one or more breach conditions.
  • the result is generated by a system comprising a transceiver unit, configured to transmit, from a cron scheduler, a request for execution of the one or more policies at a pre-defined interval to an integrated performance management (IPM).
  • the transceiver unit is configured to receive, at the IPM, a request for the report comprising the set of counters and the set of KPIs.
  • the system comprises an identification unit, configured to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the system comprises an evaluation unit, configured to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the system further comprises the identification unit, configured to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds.
  • the system further comprises a report generation unit, configured to generate at the IPM, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds.
  • the transceiver unit of the system is further configured to send from the IPM, the one or more reports to the user interface unit of the UE based on the set of policies.
  • Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for counters and key performance indicators (KPIs) policy management in a network.
  • the instructions include executable code which, when executed by one or more units of a system cause a transceiver unit to transmit, from a cron scheduler, a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM).
  • the instructions when executed by the system further cause the transceiver unit to receive, at the IPM, a request for a report comprising a set of counters and a set of KPIs.
  • the instructions when executed by the system further cause an identification unit to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the instructions when executed by the system further cause an evaluation unit to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the instructions when executed by the system further cause the identification unit to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds.
  • the instructions when executed by the system further cause a report generation unit to generate at the IPM, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds.
  • the instructions when executed by the system further cause the transceiver unit to send from the IPM, the one or more reports to one or more users based on the set of policies.
  • FIG. 1A illustrates an exemplary block diagram of a network performance management system.
  • FIG. 1B illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform, in accordance with exemplary implementation of the present disclosure.
  • FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • FIG. 3 illustrates an exemplary block diagram of a system for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
  • FIG. 4 illustrates a method flow diagram for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
  • FIG. 5 illustrates an exemplary implementation of the system for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
  • FIG. 6 illustrates an implementation of an exemplary signal flow diagram for creating a policy and starting cron scheduling for the selected KPI and policies, in accordance with exemplary implementations of the present disclosure.
  • FIG. 7. illustrates an implementation of a signal flow diagram for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
  • FIG. 8 illustrates an implementation of an exemplary signal flow diagram for showing a highlighted result to the user based on the user request for delta and KPI data, in accordance with exemplary implementations of the present disclosure.
  • "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples.
  • any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
  • a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • a user equipment may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure.
  • the user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure.
  • the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
  • storage unit or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine.
  • a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
  • the storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
  • interface refers to a shared boundary across which two or more separate components of a system exchange information or data.
  • the interface may also be referred to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
  • All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
  • the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
  • the present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for counters and key performance indicators (KPIs) policy management in a network.
  • the network performance management system [100A] comprises various sub-systems such as: an integrated performance management system [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], and a service quality manager [100q].
  • the 5G Performance Management engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network.
  • the counter data includes metrics such as connection speed, latency, data transfer rates, and many others.
  • the counter data is then processed and aggregated as required, forming a comprehensive overview of network performance.
  • the processed information is then stored in the Distributed Data Lake [100u].
  • the Distributed Data Lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
  • the 5G Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation.
  • An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
  • the 5G Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements.
  • the 5G Key Performance Indicator (KPI) Engine [100w] uses the performance counters, which are collected and processed by the 5G Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs.
  • These KPIs may include at least one of: data throughput, latency, packet loss rate, and more.
  • the KPIs are segregated based on the aggregation requirements, offering a multilayered and detailed understanding of the network performance.
  • the processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization.
  • the 5G KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
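As an illustration of how a KPI engine might derive KPIs from the performance counters collected by the performance management engine, consider this sketch; the counter names, the aggregation scheme, and the success-rate formula are assumptions for illustration, not part of the disclosure:

```python
# Hypothetical KPI derivation from raw performance counters.
def aggregate_counters(per_cell_counters):
    """Aggregate raw counters across cells before computing network-level KPIs."""
    totals = {}
    for counters in per_cell_counters:
        for name, value in counters.items():
            totals[name] = totals.get(name, 0) + value
    return totals

def call_success_rate(counters):
    """Example KPI: percentage of successful call attempts, derived from counters."""
    attempts = counters.get("call_attempts", 0)
    successes = counters.get("call_successes", 0)
    if attempts == 0:
        return None  # KPI undefined without traffic
    return successes / attempts * 100.0
```

Aggregating first and computing the KPI afterwards mirrors the per-level aggregation the KPI engine performs before reporting.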
  • the Ingestion layer (not shown in FIG. 1A) forms a key part of the IPM system [100a]. The Ingestion layer primarily establishes an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer validates the data's integrity and correctness to ensure that it is fit for further use.
  • the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e].
  • the destination is chosen based on where the data is required for further analytics and processing.
  • the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
  • the Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future.
  • the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management system [100a] through the exchange of data messages.
  • the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
  • the Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization.
  • the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability.
  • the Normalization Layer [100b] then inserts this normalized data into various databases.
  • One such database is the Caching Layer [100c].
  • the Caching Layer [100c] is a highspeed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance.
  • the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine.
  • the Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
  • the Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks.
  • raw data is gathered, normalized, and enriched by the Normalization Layer [100b].
  • the Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e].
  • several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and the Streaming Engine [100l], utilize the normalized data.
  • the Analysis Engine [100h] performs in-depth data analytics to generate insights from the data.
  • the Correlation Engine [100n] identifies and understands the relations and patterns within the data.
  • the Service Quality Manager [100q] assesses and ensures the quality of the services.
  • the Streaming Engine [100l] processes and analyses the real-time data feeds.
  • the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
  • the Message Broker [100e], an integral part of the IPM system [100a], operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
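The publish-subscribe pattern described above can be illustrated with a minimal in-process sketch; a real broker adds persistence, filesystem caching, and fault tolerance, which this toy version deliberately omits:

```python
# Minimal in-process publish-subscribe sketch: producers publish to topics,
# and every consumer subscribed to a topic receives each message.
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver the message to every consumer of this topic."""
        for callback in self._subscribers[topic]:
            callback(message)
```

Usage: a normalization layer could `publish("counters", data)` while the analysis and correlation engines each `subscribe("counters", handler)`, decoupling producers from any number of consumers.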
  • the Graph Layer [100f] plays a pivotal role in the IPM system [100a]. It can model a variety of data types, including alarm, counter, configuration, CDR data, Inframetric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Graph Layer [100f] acts as a Relationship Modeler that offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or probe and Alarm data, elucidating their interrelationships.
• the Relationship Modeler should be adept at processing the steps provided in the model and delivering the results to the system that requested them, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
  • Scheduling layer [100g] serves as a key element of the IPM System [100a], endowed with the ability to execute tasks at predetermined intervals set according to user preferences.
• a task might be an activity performing a service call, an API call to another microservice, the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System, or sending it to another microservice.
• the microservice architecture refers to a single system architecture that provides multiple functions through small, independent services. The microservices communicate with each other through API calls and remote procedure calls.
  • the versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks.
• the Analysis Engine [100h] forms a crucial part of the IPM System [100a], designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows.
  • users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in- depth overview of data and aids in pinpointing issues.
  • the system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action.
• the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
• the Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System [100a], providing a user-friendly yet advanced platform for executing computing tasks in parallel.
• the Parallel Computing Framework [100i] showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices.
  • the framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks.
• the Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
• the Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System [100a], enabling multiple clients to access and interact with data seamlessly.
• the Distributed File System [100j] is designed to manage data files that are partitioned into numerous segments known as chunks.
• the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets.
• the DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
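• the chunk-based distribution described above can be sketched as follows. This is a minimal, hypothetical illustration only: the chunk size, replica count, and node names are assumptions, not details from the disclosure, and a real DFS would use far larger chunks and placement heuristics.

```python
# Hypothetical sketch of chunk-based file distribution as described for the
# DFS [100j]: a file is partitioned into fixed-size chunks, and each chunk is
# placed on multiple nodes for redundancy. All names/sizes are illustrative.

CHUNK_SIZE = 4   # bytes; tiny for illustration, real DFS chunks are much larger
REPLICAS = 2     # assumed replication factor

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Partition a file's bytes into fixed-size chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_chunks(chunks, nodes, replicas: int = REPLICAS):
    """Assign each chunk to `replicas` distinct nodes, round-robin."""
    placement = {}
    for idx, _ in enumerate(chunks):
        placement[idx] = [nodes[(idx + r) % len(nodes)] for r in range(replicas)]
    return placement

chunks = split_into_chunks(b"counter-data")
placement = place_chunks(chunks, ["node-1", "node-2", "node-3"])
```

Distributing replicas across distinct nodes is what yields the scalability and redundancy noted above: losing one node leaves every chunk readable from another.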
  • the Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System [100a], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the LB [100k] implements various routing strategies to manage traffic.
• the LB [100k] includes round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing.
  • Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests.
  • Context-based dispatching routes traffic based on the contextual information about the incoming requests.
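• the two routing strategies above can be contrasted in a short sketch. This is a minimal illustration, not the LB's [100k] implementation: the server names, the `X-Tenant` header key, and the routing table are assumptions chosen for the example.

```python
from itertools import cycle

# Illustrative sketch of LB [100k] routing strategies: header-based dispatch
# with a fall back to round-robin scheduling. Names are hypothetical.

class LoadBalancer:
    def __init__(self, servers, header_routes=None):
        self._rr = cycle(servers)                  # round-robin rotation
        self._header_routes = header_routes or {}  # header value -> server

    def dispatch(self, request: dict) -> str:
        # Header-based dispatch: route on an HTTP header value if configured.
        tenant = request.get("headers", {}).get("X-Tenant")
        if tenant in self._header_routes:
            return self._header_routes[tenant]
        # Otherwise rotate requests evenly across the available servers.
        return next(self._rr)

lb = LoadBalancer(["srv-a", "srv-b"], header_routes={"gold": "srv-premium"})
```

Context-based dispatch would extend `dispatch` to inspect richer request context (source, session state, payload type) rather than a single header.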
  • the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
• the Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System [100a]. This engine is specifically designed for high-speed data pipelining to the User Interface (UI).
• after processing by the Streaming Engine [100l], the data is streamed to the UI, fostering rapid decision-making and responses.
• the Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow.
  • Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to- date information is always available at the UI.
• this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time.
• the Streaming Engine [100l] is configured to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the Integrated Performance Management System [100a].
• the Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System [100a]. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine.
• the Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces.
  • the main output of the Reporting Engine [100m] is a detailed report generated in Excel format.
• the Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
  • the Correlation Engine [100n] provides provisioning support.
  • a correlation model can be provisioned from UI and associated with single/multiple trigger points to run a particular correlation. It can be triggered automatically as soon as triggers are received from different components in the platform across alarm, counter, KPI, CDR, and metric data against a provisioned source trigger point.
• the Correlation Engine [100n] also provides hypothesis validation support for an on-demand execution feature for different types of correlation, providing an output that can be visualized on the UI or exported from the UI.
• the Correlation Engine [100n] may use learning models and machine learning algorithms to correlate the alarms with the raw data or clear codes or infrastructure events received from other systems.
• the Correlation Engine [100n] constantly monitors and compares the collected data with the baseline behaviour to detect any deviations. On any violation, the pre-defined remediation action is triggered in order to maintain network consistency.
• the Anomaly Detection Layer [100o] is another key subsystem of the IPM system [100a]. The fundamental purpose of the Anomaly Detection Layer [100o] is to identify and detect anomalies.
• the Anomaly Detection Layer [100o] may drill down to the level of the server and precisely identify the problematic elements in the network.
  • FIG. IB illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100B], in accordance with exemplary implementation of the present disclosure.
• the MANO architecture [100B] is developed for automatically managing telecom cloud infrastructure, managing deployment design, managing instantiation of network node(s)/ service(s), etc.
  • the MANO architecture [100B] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
• the system may comprise one or more components of the MANO architecture [100B].
  • the MANO architecture [100B] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
• the MANO architecture [100B] comprises a user interface layer, a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
• the NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalog [1044], a network services catalog [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
  • the VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network, the microservice will be instantiated.
  • the VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user.
  • the VNF lifecycle manager (compute) [1042] is responsible for determining which sequence to be followed for executing the process.
  • the VNF catalog [1044] stores the metadata of all the VNFs (also CNFs in some cases).
  • the network services catalog [1046] stores the information of the services that need to be run.
  • the network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network service/ network functions (NFs)) that must be applied to a specific networked data packet.
  • the physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is used for the CNFs lifecycle management.
• the platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070].
  • the microservices elastic load balancer [1062] is used for maintaining the load balancing of the request for the services.
• the identity & access manager [1064] is used for login purposes.
  • the command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time.
• the central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100B]. These logs are used for debugging purposes.
  • the event routing manager [1070] is responsible for routing the events i.e., the application programming interface (API) hits to the corresponding services.
• the platforms core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup & upgrade manager [1102], a micro service auditor (MAUD) [1104], and a platform operations, administration and maintenance manager [1106].
  • the NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs.
  • the assure manager [1084] is responsible for supervising the alarms the vendor is generating.
  • the performance manager [1086] is responsible for managing the performance counters.
• the policy execution engine (PEGN) [1088] is responsible for managing all the policies.
• the capacity monitoring manager (CMM) [1090] is responsible for sending the request to the PEGN [1088].
• the release management (mgmt.) repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes.
• the configuration manager & GCT [1094] manages the configuration and GCT of all the vendors.
  • the NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources.
  • the policy execution engine (PEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together.
• the platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs.
• the platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering an event, traversing the network graph, etc.
  • the VNF backup & upgrade manager [1102] takes backup of the images, binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure.
  • the micro service auditor [1104] audits the microservices.
• the micro service auditor [1104] audits and informs the same so that resources can be released for services running in the MANO architecture [100B], thereby assuring the services only run on the MANO platform [100B].
  • the platform operations, administration and maintenance manager [1106] is used for newer instances that are spawning.
• the platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122]; a generic decoder and indexer (XML, CSV, JSON) [1124]; a service adaptor [1126]; an API adapter [1128]; and an NFV gateway [1130].
• the platform external API adaptor and gateway [1122] is responsible for handling the external services (external to the MANO platform [100B]) that require the network resources.
• the generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, or JSON format.
  • the service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100B] for communication.
  • the API adapter [1128] is used to connect with the virtual machines (VMs).
• the NFV gateway [1130] is responsible for providing the path to each service going to/ incoming from the MANO architecture [100B].
  • FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
  • the computing device [200] may also implement a method for counters and key performance indicator (KPIs) policy management in a network, utilising the system.
  • the computing device [200] itself implements the method for counters and key performance indicator (KPIs) policy management in a network using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
  • the computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information.
  • the hardware processor [204] may be, for example, a general-purpose microprocessor.
• the computing device [200] may also include a main memory [206], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204].
• the main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions.
• the computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
  • a storage device [210] such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions.
  • the computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user.
• an input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204].
• a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, may be coupled to the bus [202] for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212].
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
• the computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222].
  • the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
• the computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218].
• a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218].
  • the received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
  • the present disclosure is implemented by a system [300] (as shown in FIG. 3).
  • the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4).
• FIG. 3 illustrates an exemplary block diagram of a system [300] for providing counters and key performance indicators (KPIs) policy management in a network, in accordance with the exemplary implementations of the present disclosure.
• the system [300] comprises at least one transceiver unit [302], and at least one execution unit [322] in at least one cron scheduler [304].
• the system [300] further comprises at least one transceiver unit [306], at least one identification unit [308], at least one evaluation unit [310], at least one report generation unit [312], at least one storage unit [314] and at least one alarm unit [324] in at least one IPM [100a].
• the system further comprises at least one calibration unit [318] in at least one learning module [320].
  • all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the FIG. 3, all units shown within the system should also be assumed to be connected to each other.
  • system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure.
  • the system [300] may be present in a user device to implement the features of the present disclosure.
  • the system [300] may be a part of the user device / or may be independent of but in communication with the user device (may also be referred herein as a UE).
  • the system [300] may reside in a server or a network entity.
  • the system [300] may reside partly in the server/ network entity and partly in the user device.
• the system [300] is configured for counters and key performance indicators (KPIs) policy management in a network, with the help of the interconnection between the components/ units of the system [300].
• prior to transmitting a request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], the user interface unit [316] at the UE is configured to create the one or more policies. Each policy from the one or more policies is associated with a data.
  • the data associated with each of the policy from the one or more policies includes but may not be limited to one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters and an email group to receive a KPI report.
  • the policies can be created and scheduled for each KPI individually for regular observation.
• the one or more counters refer to raw metrics which are collected from various network entities to detect a specific event.
  • the one or more KPIs are created from the one or more counters.
  • a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to number of requests delivered and number of requests failed to be delivered.
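• the derivation of a KPI from raw counters, per the success-rate example above, can be sketched in a few lines. The counter names and the percentage formula are a plausible reading of the example, not a formula stated in the disclosure.

```python
# Minimal sketch of deriving a KPI from raw counters: a request-delivery
# success rate computed from "delivered" and "failed" counters. The counter
# names and rounding are illustrative assumptions.

def success_rate_kpi(delivered: int, failed: int) -> float:
    """Success rate (%) = delivered / (delivered + failed) * 100."""
    total = delivered + failed
    if total == 0:
        return 0.0  # no requests observed in the interval; report 0 rather than divide by zero
    return round(delivered / total * 100, 2)
```

Such a function would be evaluated per schedule and per aggregation level (circle, blade, instance, cluster) over the counters collected in that interval.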
• the aggregation levels associated with the one or more KPIs refer to a network geographical area such as circle, blade, instance, cluster, etc.
  • the aggregation levels are defined by users. This determines at what granular level the user wants to analyse the KPIs.
  • the user may be a system operator, a network operator, and the like.
  • the schedule is defined as time period to measure the counters and KPIs.
  • the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs includes but may not be limited to a time interval type and a time interval size. Moreover, for a single KPI, multiple users can schedule their policies at different aggregation levels. One can include any number of counters from a network node in a policy and schedule it at any level. Further, the one or more notification templates may refer to specific formats in which the user wishes to receive the reports.
  • the user notification group information may comprise an email group to which the reports need to be delivered. The notification group information is not limited to emails, but may also comprise phone numbers, IP address, etc. of the users. Users can choose the email group to which the generated KPI report needs to be sent.
  • the one or more severity breach threshold refers to a predefined limit which defines a value above or below which a breach condition occurs.
  • the values as defined for each KPI are referred to as severity breach thresholds.
• the severity breach thresholds may be defined per severity level, for example as a warning threshold, a major threshold and a critical threshold for each KPI or counter.
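• one possible shape of such a threshold definition is sketched below. The severity levels (warning, major, critical) follow the disclosure; the KPI/metric names, the numeric limits, and the `direction` field are purely hypothetical assumptions for illustration.

```python
# Hypothetical severity breach threshold definitions for two policies.
# "direction" records whether a breach occurs when the value falls BELOW
# or rises ABOVE the limit; all numbers are illustrative only.

severity_breach_thresholds = {
    "success_rate": {            # percentage KPI; breach when value falls below
        "direction": "below",
        "warning": 98.0,
        "major": 95.0,
        "critical": 90.0,
    },
    "request_latency_ms": {      # counter-derived metric; breach when value rises above
        "direction": "above",
        "warning": 200,
        "major": 500,
        "critical": 1000,
    },
}
```

Storing thresholds in this keyed form makes it straightforward for the IPM [100a] to fetch them from the database when evaluating a policy.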
• the user interface unit [316] is further configured to transmit, to the IPM [100a], the one or more policies comprising the data. Further, the storage unit [314] at the IPM [100a] is configured to store the one or more policies in a database.
• the database is the distributed data lake (DDL) [100u] as depicted in FIG. 1A.
• the transceiver unit [302] at the cron scheduler [304] is configured to transmit a request for execution of one or more policies at a pre-defined interval to the IPM [100a].
  • the pre-defined interval is a periodical time period which defines when the policies should be executed and may be defined by the user.
  • the pre-defined interval may be 1 hour, where the counter data may be requested by the user.
  • the transceiver unit [306] at the IPM [100a] receives a request for a report.
  • the request includes but may not be limited to a set of counters and a set of KPIs for which a policy is to be executed.
• post receiving the request for the report comprising the set of counters and the set of KPIs, the execution unit [322] is configured to run, at the cron scheduler [304], a cron for the policy comprising the set of KPIs and the set of counters.
  • the cron refers to a time-based task scheduler. The cron allows the users to schedule tasks at pre-defined intervals of time. The cron may periodically execute the policy or policies needed to analyse the set of counters and the set of KPIs to generate the report.
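• the periodic triggering described above can be sketched with Python's standard-library `sched` module. This is an illustrative stand-in, not the cron scheduler [304] itself: the interval, run count, and policy name are assumptions, and the callback merely records that a policy-execution request would be sent to the IPM [100a].

```python
import sched
import time

# Minimal cron-style trigger: fire a policy-execution request at a
# pre-defined interval. Interval and policy name are hypothetical.

executed = []

def execute_policy(name: str):
    """Stand-in for transmitting a policy-execution request to the IPM."""
    executed.append(name)

def run_cron(interval_s: float, runs: int, policy: str):
    s = sched.scheduler(time.monotonic, time.sleep)
    for i in range(runs):
        # schedule each run at interval_s, 2*interval_s, 3*interval_s, ...
        s.enter(interval_s * (i + 1), 1, execute_policy, argument=(policy,))
    s.run()  # blocks until all scheduled runs complete

run_cron(0.01, 3, "kpi-policy-1")
```

A production scheduler would loop indefinitely at the user-defined interval (e.g. 1 hour) instead of a fixed run count.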
  • the identification unit [308] is configured to identify a set of policies from the one or more policies which are defined for the received set of counters and the set of KPIs.
  • the identification refers to selecting a relevant set of policies from the one or more policies based on the set of counters and the set of KPIs.
  • the policies can be created and scheduled for each KPI individually for regular observation. Therefore, when the cron scheduler [304] sends a request to execute a policy based on the counters and KPIs as provided by the user, the IPM [100a] picks the policies that have been created for the counters and KPIs as requested by the user.
  • the evaluation unit [310] at the IPM [100a] is configured to evaluate the set of policies including the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
• the breach conditions associated with the set of counters and the set of KPIs are identified in an event a current value of each of the counters from the set of counters and each of the KPIs from the set of KPIs exceeds/ falls below a corresponding severity breach threshold from the set of severity breach thresholds as defined in the policies.
  • the one or more severities may be warning, major and critical.
  • the identification unit [308] is configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds.
• the alarm unit [324] is configured to trigger one or more alarms based on the identified set of breach conditions.
  • the one or more alarms may be one of a warning alarm, a major alarm and a critical alarm.
  • Each of the one or more alarms is associated with a severity level, indicating the seriousness of the breach.
  • the warning alarm might be for low severity
  • the critical alarm may be for high severity.
  • the values of KPIs and counters falling beyond the thresholds which result in threshold breaches, are highlighted according to the severity. These severity breaches then can be used for several purposes including but not limited to notifying the user, raising an alarm.
• the report generation unit [312] is configured to generate, at the IPM [100a], one or more reports comprising the set of breach conditions. For generating the one or more reports by the report generation unit [312] at the IPM [100a], the severity breach thresholds are fetched from the database.
  • the transceiver unit [306] is configured to send from the IPM [100a], the one or more reports to one or more users based on the set of policies.
  • the one or more reports sent to the one or more users includes but may not be limited to a delta KPI report. The delta relates to the difference in result between the previously sent reports and the generated one or more reports.
• the system [300] also provides the delta for user-chosen dates, where the system [300] may utilize the stored pre-computed KPI data to perform the real-time calculation and output delivery. For example, if the user has already received and downloaded a report with details about 2 KPIs, then the IPM [100a] will not send the report comprising these 2 KPIs, but will only send the difference, that is, the report for KPIs which the user has not received.
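• the delta-report behaviour in the example above can be sketched in one function. The KPI names and values are hypothetical; only entries absent from the previously delivered report are included in the new delivery.

```python
# Sketch of delta-report generation: send only the KPI entries the user
# has not already received. KPI names/values are illustrative.

def delta_report(already_sent: dict, generated: dict) -> dict:
    """Return only the KPI entries absent from the previously sent report."""
    return {kpi: value for kpi, value in generated.items()
            if kpi not in already_sent}

previous = {"success_rate": 97.1, "drop_rate": 1.2}
new_report = {"success_rate": 97.1, "drop_rate": 1.2, "latency_ms": 210}
```

This avoids re-sending data the user already downloaded, keeping report delivery incremental.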
  • the IPM [100a] after generation of the report, interacts with a mail server to send the generated report to the one or more users.
  • the policies created by the user comprise an email group which should receive the report.
  • the IPM [100a] interacts with the mail server and sends the generated report to the email group configured in the policy.
  • the IPM [100a] sends the set of breach conditions identified to a learning module where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds.
  • the calibration is based on a set of factors including but may not be limited to a weather, a holiday and a disaster. As can be understood, these are external factors and therefore may change from time to time.
  • the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time, and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, a success rate KPI threshold may increase during the day and decrease during the night.
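The day/night calibration described above can be sketched as follows. This is an illustrative sketch only: the function name, the peak-hour window, and the 0.5-point adjustment step are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of time-based threshold calibration: a success-rate
# threshold is made stricter during an assumed daytime peak window and
# relaxed at night, mirroring the calibration unit's behaviour.
from datetime import time

def calibrate_success_rate_threshold(base_threshold: float, now: time) -> float:
    """Return a calibrated success-rate threshold (%) for the given time of day."""
    day_start, day_end = time(8, 0), time(20, 0)   # assumed peak window
    if day_start <= now < day_end:
        return min(base_threshold + 0.5, 100.0)    # stricter by day
    return max(base_threshold - 0.5, 0.0)          # relaxed at night

print(calibrate_success_rate_threshold(99.5, time(12, 0)))  # 100.0
print(calibrate_success_rate_threshold(99.5, time(23, 0)))  # 99.0
```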
  • the storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
  • Referring to FIG. 4, an exemplary method flow diagram [400] for counters and key performance indicators (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure, is shown.
  • the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure.
  • as shown in FIG. 4, the method [400] starts at step [402].
  • the method comprises transmitting, by a transceiver unit [302], from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a].
  • the pre-defined interval may be 1 hour, where the counter data may be requested by the user.
  • the user interface unit [316], prior to transmitting the request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], creates the one or more policies. Each policy from the one or more policies is associated with data.
  • the data associated with each of the policy from the one or more policies includes but may not be limited to one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters, one or more notification templates and a user notification group information.
  • the policies can be created and scheduled for each KPI individually for regular observation.
  • the one or more counters refer to raw metrics which are collected from various network entities to detect a specific event. For example, the number of times a request fails to be delivered or the number of times a response is not received.
  • the one or more KPIs are created from the one or more counters. For example, a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to number of requests delivered and number of requests failed to be delivered.
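Building on the example above, a success-rate KPI derived from the two delivery counters can be sketched as below. The function and counter names are illustrative assumptions.

```python
# Minimal sketch of deriving the success-rate KPI from two raw counters,
# following the request-delivery example in the text.
def success_rate(requests_delivered: int, requests_failed: int) -> float:
    """Success-rate KPI (%) computed from delivery counters."""
    total = requests_delivered + requests_failed
    if total == 0:
        return 0.0  # no traffic observed in the interval
    return 100.0 * requests_delivered / total

print(success_rate(995, 5))  # 99.5
```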
  • the aggregation levels associated with the one or more KPIs refer to a network geographical area such as a circle, a blade, etc. The aggregation levels are defined by users. This determines at what granular level the user wants to analyse the KPIs.
  • the user may be a system operator, a network operator, and the like.
  • the schedule is defined as time period to measure the counters and KPIs.
  • the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs includes but may not be limited to a time interval type and a time interval size.
  • multiple users can schedule their policies at different aggregation levels.
  • One can include any number of counters from a network node in a policy and schedule it at any level.
  • the one or more notification templates may refer to specific formats in which the user wishes to receive the reports.
  • the user notification group information may comprise an email group to which the reports need to be delivered.
  • the notification group information is not limited to emails, but may also comprise phone numbers, IP address, etc. of the users. Users can choose the email group to which the generated KPI report needs to be sent.
  • the one or more severity breach thresholds refer to predefined limits, each defining a value above or below which a breach condition occurs.
  • the values as defined for each KPI are referred to as severity breach thresholds.
  • the severity breach thresholds may be defined as follows:
  • the storage unit [314] stores the one or more policies in a database.
  • the database is the distributed data lake (DDL) [100u] as depicted in FIG. 1A.
  • the method comprises receiving, by the transceiver unit [306], at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs.
  • the execution unit [322] is configured to run at the cron scheduler [304], a cron for the set of KPIs and the set of counters.
  • the cron refers to a time-based task scheduler. The cron allows the users to schedule tasks at predefined intervals of time. The cron may periodically execute the tasks needed to gather the set of counters and the set of KPIs to generate the report.
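The time-based scheduling described above can be illustrated with a minimal in-memory scheduler. This is a sketch in the spirit of cron, not the disclosed cron scheduler [304] itself; the class name, tick granularity, and job representation are assumptions.

```python
# Illustrative sketch of a time-based task scheduler: a task registered
# with an interval fires whenever that interval elapses.
class SimpleCron:
    def __init__(self):
        self.jobs = []  # list of (interval_seconds, task) pairs

    def schedule(self, interval_seconds, task):
        """Register a task to run every interval_seconds."""
        self.jobs.append((interval_seconds, task))

    def due_jobs(self, elapsed_seconds):
        """Return the tasks due at a given elapsed time."""
        return [task for interval, task in self.jobs
                if elapsed_seconds % interval == 0]

cron = SimpleCron()
cron.schedule(3600, "execute_kpi_policy")   # hourly, as in the 1-hour example
print(cron.due_jobs(7200))  # ['execute_kpi_policy']
```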
  • the method comprises identifying, by an identification unit [308], at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the identification refers to selecting a relevant set of policies from the one or more policies based on the set of counters and the set of KPIs.
  • the policies can be created and scheduled for each KPI individually for regular observation. Therefore, when the cron scheduler [304] sends a request to execute a policy based on the counters and KPIs as provided by the user, the IPM [100a] picks the policies that have been created for the counters and KPIs as requested by the user.
  • the method comprises evaluating, by an evaluation unit [310], at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
  • the set of breach conditions associated with the set of counters and the set of KPIs is identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds/falls below a corresponding severity breach threshold from the set of severity breach thresholds.
  • the one or more severities may be warning, major and critical.
  • the method comprises identifying, by the identification unit [308], at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds.
  • the severity breach thresholds may be defined as follows:
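A hedged sketch of evaluating a KPI value against severity breach thresholds is given below. Only the severity names (warning, major, critical) and the "Success Rate > 99.5% means no breach" figure come from this disclosure; the remaining band boundaries are hypothetical.

```python
# Hypothetical severity-band evaluation for a success-rate KPI. The
# 99.0% and 98.0% boundaries are illustrative assumptions; only the
# 99.5% no-breach limit is taken from the text.
def identify_breach(success_rate: float) -> str:
    """Map a success-rate KPI value (%) to a breach severity."""
    if success_rate > 99.5:
        return "no breach"
    if success_rate > 99.0:
        return "warning"
    if success_rate > 98.0:
        return "major"
    return "critical"

print(identify_breach(99.7))  # no breach
print(identify_breach(97.5))  # critical
```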
  • the method comprises generating, by a report generation unit [312], at the IPM [100a], one or more reports comprising the set of breach conditions.
  • the alert unit [324] triggers one or more alarms based on the set of breach conditions.
  • the one or more alarms may be one of a warning alarm, a major alarm and a critical alarm.
  • Each of the one or more alarms is associated with a severity level, indicating the seriousness of the breach.
  • the warning alarm might be for low severity
  • the critical alarm may be for high severity.
  • the values of KPIs and counters that fall beyond the thresholds, resulting in threshold breaches, are highlighted according to severity. These severity breaches can then be used for several purposes, including but not limited to notifying the user and raising an alarm.
  • the method comprises sending, by the transceiver unit [306], from the IPM [100a], the one or more reports to one or more users based on the set of policies.
  • the one or more reports may be generated by the report generation unit [312].
  • the one or more reports sent to the one or more users includes but may not be limited to a delta KPI report.
  • the delta relates to the difference in result between the previously sent reports and the generated one or more reports.
  • the method [400] also provides the delta for user-chosen dates, where the method [400] may utilize the stored pre-computed KPI data to perform the real-time calculation and output delivery.
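The delta-report idea described above, sending only the KPIs the user has not already received, can be sketched as a simple dictionary difference. The data structure and names are illustrative assumptions.

```python
# Sketch of a delta KPI report: only KPI entries absent from the
# previously sent report appear in the newly delivered one.
def delta_report(previously_sent: dict, generated: dict) -> dict:
    """Return only the KPI entries not already sent to the user."""
    return {kpi: value for kpi, value in generated.items()
            if kpi not in previously_sent}

sent = {"success_rate": 99.2, "latency_ms": 41}
new = {"success_rate": 99.2, "latency_ms": 41, "drop_rate": 0.3}
print(delta_report(sent, new))  # {'drop_rate': 0.3}
```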
  • the IPM [100a] interacts with a mail server to send the generated report to the one or more users.
  • the IPM [100a] sends the set of breach conditions identified to a learning module where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds.
  • the calibration is based on a set of factors including but may not be limited to weather, holidays, and disasters. As can be understood, these are external factors and therefore may change from time to time.
  • the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time, and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, a success rate KPI threshold may increase during the day and decrease during the night.
  • the storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
  • Referring to FIG. 5, an exemplary implementation of the system [500] for counters and key performance indicators (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure, is shown.
  • the implementation system [500] comprises the user interface (UI) unit [316] at a User Equipment, the load balancer [100k], the integrated performance management (IPM) [100a], the computational layer [100d], the distributed data lake [100u], the distributed file system [100j], the cron scheduler [304], a mail server [502] and an artificial intelligence/machine learning layer [504].
  • the UI unit [316] may be one of a graphical user interface (GUI), a command line interface, and the like.
  • the GUI refers to an interface through which the user interacts with the system [500] via visual or graphical representations of icons, menus, etc.
  • the GUI is an interface that may be used within a smartphone, laptop, computer, etc.
  • the CLI refers to a text-based interface used by the user to interact with the system [500].
  • the user may input text lines, called command lines, in the CLI to access the data in the system.
  • the user creates one or more policies related to counters and KPIs at the UI unit [316]. Once the user has finished creating the policies, the user saves the one or more policies.
  • the request to save the one or more policies is transmitted by the UI unit [316] to the load balancer [100k] to distribute the one or more policies to one or more instances of the IPM [100a].
  • the load balancer (LB) [100k] is a component of the IPM architecture [100A] to efficiently distribute incoming network traffic or requests.
  • the load balancer [100k] ensures even distribution of requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance.
  • the LB [100k] implements various routing strategies to manage traffic.
  • the routing strategies of the LB [100k] include round-robin scheduling, header-based request dispatch, and context-based request dispatch, as defined in FIG. 1A.
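The round-robin strategy named above can be illustrated with a minimal dispatcher. This is a sketch, not the disclosed load balancer [100k]; the class, instance names, and return shape are assumptions.

```python
# Illustrative round-robin dispatch across IPM instances: each request
# is routed to the next instance in a repeating ring.
from itertools import cycle

class RoundRobinLB:
    def __init__(self, instances):
        self._ring = cycle(instances)  # endless iterator over instances

    def dispatch(self, request):
        """Route the request to the next instance in the ring."""
        return next(self._ring), request

lb = RoundRobinLB(["ipm-1", "ipm-2", "ipm-3"])
print([lb.dispatch("save_policy")[0] for _ in range(4)])
# ['ipm-1', 'ipm-2', 'ipm-3', 'ipm-1']
```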
  • the request to save the one or more policies is further transmitted by the load balancer [100k] to the IPM [100a].
  • the IPM [100a] is configured to collect, process, and manage performance counter data from data sources within the network.
  • the counter data includes metrics such as connection speed, latency, data transfer rates, and many others.
  • the IPM [100a] is further configured to collect the one or more policies and send them for storage in the Distributed Data Lake [100u].
  • the Distributed Data Lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
  • the user can also request for one or more reports for a set of counters and KPIs which the user wants to analyse and observe.
  • the user sends the request from the UI unit [316] to the IPM [100a] via the load balancer [100k].
  • the IPM [100a] identifies a set of policies comprising the requested set of counters and KPIs.
  • the IPM [100a] evaluates the set of policies based on a set of severity breach thresholds.
  • the severity breach thresholds are defined in the set of policies for counters and KPIs. Based on the evaluation, the IPM [100a] identifies a set of breach conditions for the requested set of counters and KPIs.
  • the IPM [100a] generates one or more reports for the user comprising the set of breach conditions which are calibrated based on the severity breach thresholds.
  • the breach thresholds are highlighted based on the severities identified by the breach conditions.
  • the severity breach thresholds may be defined as follows:
  • if the identified breach condition falls in the severity defined as “critical”, then the report will highlight this in dark red color. Similarly, if the identified breach condition falls in the severity defined as “major”, then the report will highlight this in red color. And, if the identified breach condition falls in the severity defined as “warning”, then the report will highlight this in orange color.
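The severity-to-colour highlighting described above amounts to a simple lookup. The colour names (dark red, red, orange) follow the text; the dictionary name and the fallback value are illustrative assumptions.

```python
# Sketch of mapping identified breach severities to report highlight
# colours: critical -> dark red, major -> red, warning -> orange.
HIGHLIGHT = {
    "critical": "dark red",
    "major": "red",
    "warning": "orange",
}

def highlight_for(severity: str) -> str:
    """Return the highlight colour for a severity, or 'none' if unbreached."""
    return HIGHLIGHT.get(severity, "none")

print(highlight_for("major"))      # red
print(highlight_for("no breach"))  # none
```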
  • the IPM [100a] identifies the mail server [502] for sending the reports to the user.
  • the mail server [502] is a system responsible for sending, receiving, and storing emails.
  • the mail server [502] ensures that emails are correctly routed to the users for the one or more policies.
  • the IPM [100a] sends the set of breach conditions identified to a learning module where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds.
  • the calibration is based on a set of factors including but may not be limited to weather, holidays, and disasters. As can be understood, these are external factors and therefore may change from time to time.
  • the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time, and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, a success rate KPI threshold may increase during the day and decrease during the night.
  • the storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
  • the learning module is an Artificial Intelligence (AI)/Machine Learning (ML) layer [504] which calibrates the severity breach threshold for the one or more identified policies based on geographical conditions, time and other factors.
  • the cron scheduler [304] runs a cron for the set of counters and KPIs as requested by the user.
  • the cron refers to a time-based task scheduler.
  • the cron scheduler [304] allows the users to schedule tasks at pre-defined intervals of time.
  • the cron may periodically execute the tasks needed to gather the values for the set of counters and KPIs to generate the report.
  • the cron information and its state of execution, such as in-progress or terminated, is stored in the DDL [100u].
  • the Computation Layer [100d] serves as the main hub for complex data processing tasks. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur.
  • the Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System [100a] that enables multiple clients to access and interact with data seamlessly.
  • the Distributed File System [100j] is designed to manage data files that are partitioned into numerous segments known as chunks.
  • Referring to FIG. 6, an exemplary implementation of a signal flow diagram [600] for creating policies, in accordance with exemplary implementations of the present disclosure, is shown.
  • a user [602] creates the one or more policies at the UI unit [316]. In one example, after creation of the one or more policies, the user [602] may select to save the one or more policies.
  • the UI unit [316] sends a request via the load balancer [100k] to save the one or more policies at the IPM [100a].
  • the load balancer [100k] sends the one or more policies to the IPM [100a] for saving.
  • the IPM [100a] saves the data of the one or more policies at the distributed data lake [100u].
  • the IPM [100a] forwards the request to the cron scheduler [304] for running a cron for the one or more policies.
  • at Step 6, the state of the cron scheduler [304] is stored at the distributed data lake [100u].
  • the cron scheduler [304] sends an acknowledgment for starting the cron scheduling for the one or more policies to the IPM [100a].
  • the IPM [100a] sends a confirmation of the scheduling of the one or more policies to the UI unit [316].
  • the UI unit [316] displays an update of the one or more policies being saved successfully, to the user.
  • Referring to FIG. 7, an exemplary implementation of a signal flow diagram [700] for counters and KPIs policy management, in accordance with exemplary implementations of the present disclosure, is shown.
  • the user creates one or more policies and saves them at the DDL [100u]. Later, the user sends a request for one or more reports comprising a set of counters and KPIs which the user wants to analyse and observe.
  • a request for execution of one or more policies at a pre-defined interval is received at the IPM [100a] from the cron scheduler [304]. As shown in FIG. 7, at Step 1, the cron scheduler [304] sends the request for execution of one or more policies comprising the requested set of counters and KPIs to the IPM [100a].
  • the request is transmitted to the computation layer [100d] for processing if the request is received before the retention period expires for the computation layer [100d].
  • the IPM [100a] is configured to receive an acknowledgement from the computational layer [100d].
  • the retention period refers to the maximum duration of time for which the computation layer [100d] stores the data in its cache. In one example, the retention period may be defined as 10 days. Therefore, the present disclosure utilizes the stored pre-computed KPI data to perform the real-time calculation and output delivery.
  • the computation layer [100d] sends a request to the distributed file system [100j] to access the stored data to execute the one or more policies.
  • the distributed file system [100j] sends the data based on the request.
  • the computation layer [100d] performs computations on the data.
  • the computation may identify a set of breach conditions based on the defined set of severity breach thresholds in the one or more policies.
  • the severity breach thresholds are defined for the set of counters and the KPIs.
  • the computation layer [100d] sends back the KPI data based on the computations to the IPM [100a].
  • the IPM [100a] queries the distributed data lake [100u] to fetch the required counter data.
  • the distributed data lake [100u] sends the counter data based on the query to the IPM [100a].
  • the IPM [100a] computes the final data to generate the report. This computation is to identify a set of breach conditions based on the defined set of severity breach thresholds in the one or more policies.
  • the severity breach thresholds are defined for the set of counters and the KPIs.
  • the final data computed is in the form of a report which the user can use to analyse the KPIs.
  • the set of breach conditions associated with the counters and the KPIs are identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds a corresponding severity breach threshold from the set of severity breach thresholds.
  • the IPM [100a] establishes a connection with the mail server [502] to send the report to the user [602].
  • the IPM [100a] sends a request to calibrate the set of breach conditions to the AI/ML [504].
  • the breach conditions are calibrated by the AI/ML [504] based on the severity breach thresholds.
  • the calibration is based on a set of factors including but may not be limited to weather, holidays, and disasters. As can be understood, these are external factors and therefore may change from time to time.
  • the AI/ML [504] is further configured to modify the severity breach thresholds for the set of policies.
  • the AI/ML [504] sends a request to the DDL [100u] to save the modified severity breach thresholds in the one or more policies.
  • the mail server [502] sends a notification via mail to all users based on the email group information mentioned in the one or more policies.
  • Referring to FIG. 8, an exemplary implementation of a signal flow diagram [800] for showing a highlighted result to the user based on a user request, in accordance with exemplary implementations of the present disclosure, is shown.
  • at Step 1, the user [602] sends a request to the UI unit [316] to show the generated report or the result.
  • the UI unit [316] sends the request to the load balancer [100k] to fetch the generated report.
  • the load balancer [100k] forwards the request to the IPM [100a] to fetch the report.
  • the IPM [100a] fetches the severity threshold based on the request, from the distributed data lake [100u]. Further, at Step 5, the IPM [100a] sends the delta KPI data to the load balancer [100k].
  • the present disclosure also provides the delta for user-chosen dates via Step 1, which utilizes the stored pre-computed KPI data to perform the real-time calculation and output delivery.
  • at Step 6, the load balancer [100k] forwards the data to the UI unit [316].
  • the UI unit [316] displays a highlighted report comprising the delta KPI data to the user [602].
  • a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to number of requests delivered and number of requests failed to be delivered.
  • the severity breach thresholds may be defined as follows: if Success Rate > 99.5%, then “no breach condition”.
  • if the identified breach condition falls in the severity defined as “critical”, then the report will highlight this in dark red color. Similarly, if the identified breach condition falls in the severity defined as “major”, then the report will highlight this in red color. And, if the identified breach condition falls in the severity defined as “warning”, then the report will highlight this in orange color.
  • the present disclosure further discloses a User Equipment (UE).
  • the UE comprises a user interface unit [316]. The user interface unit [316] is configured to create one or more policies comprising a set of counters and a set of KPIs.
  • the UE comprises a transceiver unit to send a request to a load balancer to save the one or more policies.
  • the transceiver unit is further configured to send a request, for fetching a result for the set of counters and the set of KPIs.
  • the transceiver unit is further configured to receive, a report comprising the result for the set of counters and the set of KPIs.
  • the result comprises one or more highlights for one or more breach conditions.
  • a system [300] comprising a transceiver unit [302], configured to transmit, from a cron scheduler [304], a request for execution of the one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a].
  • the transceiver unit [306] configured to receive, at the IPM [100a], a request for the report comprising the set of counters and the set of KPIs.
  • the system [300] comprises an identification unit [308], configured to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the system [300] comprises an evaluation unit [310], configured to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the system [300] further comprises the identification unit [308], configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds.
  • the system [300] further comprises a report generation unit [312], configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds.
  • the transceiver unit [306] is configured to send from the IPM [100a], the one or more reports to the user interface unit [316] of the UE based on the set of policies.
  • the present disclosure further discloses a non-transitory computer readable storage medium storing instructions for counters and key performance indicators (KPIs) policy management in a network.
  • the instructions include executable code which, when executed by one or more units of a system, cause a transceiver unit [302] to transmit, from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a].
  • the instructions when executed by the system further cause the transceiver unit to [306] receive, at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs.
  • the instructions when executed by the system further cause an identification unit [308] to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs.
  • the instructions when executed by the system further cause an evaluation unit [310] to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds.
  • the instructions when executed by the system further cause the identification unit [308] to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds.
  • the instructions when executed by the system further cause a report generation unit [312] to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds.
  • the instructions when executed by the system further cause the transceiver unit [306] to send from the IPM [100a], the one or more reports to one or more users based on the set of policies.
  • the present disclosure reduces the grunt work and automates the tasks which need to be performed after having observed any kind of breaches.
  • the present disclosure further provides a solution through which one single policy for a counter or KPI gets applied across many of the IPM modules, such as live monitoring and report generation, without extra effort.
  • the present disclosure devises a tool to calibrate the thresholds of the policies according to weather, holidays, and disasters to overcome unforeseen turns of events.


Abstract

The present disclosure relates to a method and a system for counters and KPIs policy management in a network. The method comprises transmitting from a cron scheduler [304], a request for execution of one or more policies to an integrated performance management (IPM) [100a]. The method comprises receiving at the IPM [100a], a request for a report comprising a set of counters and KPIs. The method comprises identifying at the IPM [100a], a set of policies. The method comprises evaluating at the IPM [100a], the set of policies based on a set of severity breach thresholds. The method comprises identifying at the IPM [100a], a set of breach conditions based on the set of severity breach thresholds. The method comprises generating at the IPM [100a] one or more reports. The method comprises sending from the IPM [100a], the one or more reports to one or more users.

Description

METHOD AND SYSTEM FOR COUNTERS AND KEY PERFORMANCE INDICATORS (KPIs) POLICY MANAGEMENT IN A NETWORK
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network management systems. More particularly, embodiments of the present disclosure relate to counters and key performance indicators (KPIs) policy management in a network.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPIs) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] Typically, in a mobile network, a network node or network element, such as a base station, an access point (AP), a router, etc., collects event statistics in the form of performance counters and sends them to a network performance management system for diagnostic purposes. These performance counters may be logged and maintained by the management system in order to assess the performance of network nodes. In order to catch abnormalities, the user would need to check the reports on a regular basis. These results were also prone to human error. Thus, there is a need in the art to help the user by reducing the grunt work and automating the tasks which need to be performed after having observed any kind of breaches. [0005] Also, KPI values act as metrics for some real-world problems. The current KPI values are analysed and compared with the past values for getting the trend in terms of the absolute change as well as the percentage change. The people who perform the monitoring and observation tasks take note of every kind of change happening in the KPIs they are held responsible for. Normally, a user would download an excel report from a dashboard page and perform some calculations in excel to get the increment, decrement, or no-change type results for the dates he/she has chosen. A graph can be used to visualize the ups and downs in KPIs.
[0006] Thus, there exists an imperative need in the art to provide a system and a method for counter and KPI policy management, which the present disclosure aims to address. The present disclosure reduces the time spent on such tedious work to virtually none and allows the user to focus on the task at hand, namely monitoring. This helps the user by reducing the grunt work and automating the tasks which need to be performed after any kind of breach has been observed.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for counters and key performance indicators (KPIs) policy management in a network. The method comprises transmitting, by a transceiver unit, from a cron scheduler, a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) module. The method further comprises receiving, by the transceiver unit, at the IPM module, a request for a report comprising a set of counters and a set of KPIs. Further, the method comprises identifying, by an identification unit, at the IPM module, a set of policies from the one or more policies comprising the set of counters and the set of KPIs. Furthermore, the method comprises evaluating, by an evaluation unit, at the IPM module, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. Thereafter, the method comprises identifying, by the identification unit, at the IPM module, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds. Further, the method comprises generating, by a report generation unit, at the IPM module, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds. The method further comprises sending, by the transceiver unit, from the IPM module, the one or more reports to one or more users based on the set of policies.
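The evaluation and breach-identification steps of the method above may be sketched as follows. This is an illustrative sketch only; the function and field names (`evaluate_policies`, `"thresholds"`, the severity labels) are assumptions for explanation and do not appear in the disclosure.

```python
# Illustrative sketch: evaluating policies against severity breach thresholds.
# All names and structures here are hypothetical, for explanation only.

def evaluate_policies(policies, current_values):
    """Return breach conditions for every counter/KPI whose current
    value exceeds one of its severity breach thresholds."""
    breaches = []
    for policy in policies:
        for metric, thresholds in policy["thresholds"].items():
            value = current_values.get(metric)
            if value is None:
                continue
            # Check thresholds from most to least severe; record the
            # highest severity whose threshold the current value exceeds.
            for severity, limit in sorted(thresholds.items(),
                                          key=lambda kv: kv[1], reverse=True):
                if value > limit:
                    breaches.append({"policy": policy["name"],
                                     "metric": metric,
                                     "severity": severity,
                                     "value": value,
                                     "threshold": limit})
                    break
    return breaches

policies = [{"name": "latency-policy",
             "thresholds": {"latency_ms": {"minor": 50, "major": 100,
                                           "critical": 200}}}]
breaches = evaluate_policies(policies, {"latency_ms": 120})
# 120 exceeds the "major" threshold (100) but not the "critical" one (200)
```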
[0009] In an exemplary aspect of the present disclosure, prior to transmitting the request for execution of the one or more policies from the cron scheduler to the IPM, the method comprises creating, at a user interface unit, the one or more policies. Each policy from the one or more policies is associated with a data. The method further comprises transmitting, by the user interface unit to the IPM, the one or more policies comprising the data. Further, the method comprises storing, by a storage unit, at the IPM, the one or more policies in a database. Furthermore, the method comprises forwarding, by the transceiver unit, from the IPM to the cron scheduler, a request to schedule the one or more policies based on the data.
[0010] In an exemplary aspect of the present disclosure, the data associated with each of the policy from the one or more policies comprises one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters, one or more notification templates and a user notification group information.
[0011] In an exemplary aspect of the present disclosure, the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs comprises a time interval type and a time interval size.
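As an illustration of the data described in paragraphs [0010] and [0011], a policy might be represented as a simple mapping. All field names and values below are hypothetical examples, not part of the disclosure:

```python
# Hypothetical representation of the data associated with one policy:
# counters, KPIs, aggregation levels, schedules (interval type and size),
# severity breach thresholds, notification template, and user group.
policy = {
    "name": "ran-availability-policy",
    "counters": ["rrc_conn_attempts"],
    "kpis": ["call_drop_rate"],
    "aggregation_levels": {"call_drop_rate": ["cell", "region"]},
    "schedules": {
        "rrc_conn_attempts": {"interval_type": "minutes", "interval_size": 15},
        "call_drop_rate": {"interval_type": "hours", "interval_size": 1},
    },
    "severity_thresholds": {
        "call_drop_rate": {"minor": 1.0, "major": 2.0, "critical": 5.0},
        "rrc_conn_attempts": {"major": 100000},
    },
    "notification_template": "breach-email-v1",
    "notification_group": "noc-operators",
}
```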
[0012] In an exemplary aspect of the present disclosure, the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
[0013] In an exemplary aspect of the present disclosure, the set of breach conditions associated with the set of counters and the set of KPIs is identified in an event that a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds a corresponding severity breach threshold from the set of severity breach thresholds.

[0014] In an exemplary aspect of the present disclosure, the method further comprises sending, by the IPM, the set of breach conditions to a learning module. Further, the method comprises calibrating, by a calibration unit, at the learning module, the severity breach thresholds associated with the set of breach conditions. The calibration is based on a set of factors comprising at least one of a weather, a holiday and a disaster. Furthermore, the method comprises modifying, by the calibration unit, the severity breach thresholds for the set of policies. The method further comprises storing, by the storage unit, at the learning module, the modified severity breach thresholds for the set of policies in the database.
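The threshold calibration performed by the learning module could be sketched as below, under the assumption that each active factor (weather, holiday, disaster) relaxes the thresholds by a fixed multiplier. The multiplier values are invented for illustration; an actual learning module would derive them from historical data:

```python
# Illustrative sketch of severity-threshold calibration by a learning
# module. The factor multipliers are invented for explanation only.
CALIBRATION_FACTORS = {"weather": 1.10, "holiday": 1.25, "disaster": 1.50}

def calibrate_thresholds(thresholds, active_factors):
    """Relax severity thresholds when external factors (weather,
    holiday, disaster) are active, to avoid spurious breaches."""
    multiplier = 1.0
    for factor in active_factors:
        multiplier *= CALIBRATION_FACTORS.get(factor, 1.0)
    return {severity: limit * multiplier
            for severity, limit in thresholds.items()}

calibrated = calibrate_thresholds({"major": 100, "critical": 200}, ["holiday"])
# {'major': 125.0, 'critical': 250.0}
```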
[0015] In an exemplary aspect of the present disclosure, post receiving, by the transceiver unit, at the IPM, the request for the report comprising the set of counters and the set of KPIs, the method comprises running, by an execution unit, at the cron scheduler, a cron for the set of KPIs and the set of counters.
[0016] In an exemplary aspect of the present disclosure, for generating the one or more reports by the report generation unit, at the IPM, the severity breach thresholds are fetched from the database.
[0017] In an exemplary aspect of the present disclosure, the method further comprises triggering, by an alert unit, one or more alarms based on the set of breach conditions.
[0018] In an exemplary aspect of the present disclosure, the one or more reports sent to the one or more users comprises a delta KPI report, wherein the delta relates to the difference in result between the previously sent reports and the generated one or more reports.
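The delta described above can be illustrated as the absolute and percentage change of each KPI between the previously sent report and the newly generated one. The function and KPI names are hypothetical:

```python
# Hypothetical sketch of computing a delta KPI report: the absolute
# and percentage change between the previous and current report values.
def delta_report(previous, current):
    deltas = {}
    for kpi, new_value in current.items():
        old_value = previous.get(kpi)
        if old_value is None:
            continue  # KPI absent from the previous report; no delta
        absolute = new_value - old_value
        percent = (absolute / old_value * 100) if old_value else None
        deltas[kpi] = {"absolute": absolute, "percent": percent}
    return deltas

report = delta_report({"throughput_mbps": 80.0}, {"throughput_mbps": 100.0})
# throughput rose by 20.0 Mbps, i.e. 25.0 %
```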
[0019] Another aspect of the present disclosure may relate to a system for counters and key performance indicators (KPIs) policy management in a network. The system comprises a transceiver unit. The transceiver unit is configured to transmit, from a cron scheduler, a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM). The transceiver unit is further configured to receive, at the IPM, a request for a report comprising a set of counters and a set of KPIs. The system further comprises an identification unit. The identification unit is configured to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs. Further, the system comprises an evaluation unit. The evaluation unit is configured to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The identification unit is configured to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds. The system further comprises a report generation unit. The report generation unit is configured to generate at the IPM, one or more reports comprising the set of breach conditions. The breach conditions are calibrated based on the severity breach thresholds. The transceiver unit is configured to send from the IPM, the one or more reports to one or more users based on the set of policies.
[0020] Yet another aspect of the present disclosure relates to a User Equipment (UE). The UE comprises a user interface unit. The user interface unit is configured to create, one or more policies comprising a set of counters and a set of KPIs. The UE comprises a transceiver unit to send a request to a load balancer to save the one or more policies. The transceiver unit is further configured to send a request, for fetching a result for the set of counters and the set of KPIs. The transceiver unit is further configured to receive, a report comprising the result for the set of counters and the set of KPIs. The result comprises one or more highlights for one or more breach conditions. The result is generated by a system comprising a transceiver unit, configured to transmit, from a cron scheduler, a request for execution of the one or more policies at a pre-defined interval to an integrated performance management (IPM). The transceiver unit is configured to receive, at the IPM, a request for the report comprising the set of counters and the set of KPIs. The system comprises an identification unit, configured to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs. The system comprises an evaluation unit, configured to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The system further comprises the identification unit, configured to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds. The system further comprises a report generation unit, configured to generate at the IPM, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds. 
The transceiver unit of the system is further configured to send from the IPM, the one or more reports to the user interface unit of the UE based on the set of policies.
[0021] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for counters and key performance indicator (KPIs) policy management in a network, the instructions include executable code which, when executed by one or more units of a system cause a transceiver unit to transmit, from a cron scheduler, a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM). The instructions when executed by the system further cause the transceiver unit to receive, at the IPM, a request for a report comprising a set of counters and a set of KPIs. The instructions when executed by the system further cause an identification unit to identify at the IPM, a set of policies from the one or more policies comprising the set of counters and the set of KPIs. The instructions when executed by the system further cause an evaluation unit to evaluate at the IPM, the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The instructions when executed by the system further cause the identification unit to identify at the IPM, a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on a set of severity breach thresholds. The instructions when executed by the system further cause a report generation unit to generate at the IPM, one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds. The instructions when executed by the system further cause the transceiver unit to send from the IPM, the one or more reports to one or more users based on the set of policies.
OBJECTS OF THE DISCLOSURE
[0022] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0023] It is an object of the present disclosure to provide a system and a method for providing counter and policy management for creating and scheduling the policies for each KPI individually for regular observation.
[0024] It is another object of the present disclosure to reduce the grunt work and automate the tasks which need to be performed after having observed any kind of breaches.
[0025] It is another object of the present disclosure to provide a solution through which one single policy for a Counter or KPI gets applied across many of the IPM modules like in live monitoring, report generation without extra efforts.
[0026] It is yet another object of the present disclosure to devise a tool to calibrate the thresholds of the policies according to the weather, holidays, and disasters to overcome unforeseen turns of events.

DESCRIPTION OF THE DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0028] FIG. 1A illustrates an exemplary block diagram of a network performance management system.
[0029] FIG. 1B illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/platform, in accordance with an exemplary implementation of the present disclosure.
[0030] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0031] FIG. 3 illustrates an exemplary block diagram of a system for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
[0032] FIG. 4 illustrates a method flow diagram for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
[0033] FIG. 5 illustrates an exemplary implementation of the system for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.

[0034] FIG. 6 illustrates an implementation of an exemplary signal flow diagram for creating a policy and starting cron scheduling for the selected KPI and policies, in accordance with exemplary implementations of the present disclosure.
[0035] FIG. 7 illustrates an implementation of a signal flow diagram for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure.
[0036] FIG. 8 illustrates an implementation of an exemplary signal flow diagram for showing a highlighted result to the user based on the user request for delta and KPI data, in accordance with exemplary implementations of the present disclosure.
[0037] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0038] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0039] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0040] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0041] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0042] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive — in a manner similar to the term “comprising” as an open transition word — without precluding any additional or other elements.
[0043] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0044] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0045] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0046] As used herein “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also be referred to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0047] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0048] As used herein the transceiver unit include at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0049] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing method and system of counters and key performance indicator (KPIs) policy management in a network.
[0050] Referring to FIG. 1A, an exemplary block diagram of a network performance management system [100A], in accordance with the exemplary embodiments of the present invention, is shown. Referring to FIG. 1A, the network performance management system [100A] comprises various sub-systems such as: an integrated performance management system [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], a service quality manager [100q] and a correlation engine [100n]. Exemplary connections between these sub-systems are also shown in FIG. 1A. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various sub-systems that are needed to realise the effects are within the scope of this disclosure.
[0051] Following are the various components of the system [100A], as shown in FIG. 1A:
Integrated Performance Management (IPM) system [100a] comprises a 5G Performance Management engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100w].
5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network. The counter data includes metrics such as connection speed, latency, data transfer rates, and many others. The counter data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in the Distributed Data Lake [100u]. The Distributed Data Lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability. An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements. The 5G KPI Engine [100w] uses the performance counters, which are collected and processed by the 5G Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs may include at least one of: data throughput, latency, packet loss rate, and more. Once the KPIs are computed, the KPIs are segregated based on the aggregation requirements, offering a multilayered and detailed understanding of the network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the 5G Performance Management engine [100v], the 5G KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
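By way of illustration, a KPI engine derives KPIs such as a dropped-call rate from raw performance counters. The counter names and the formula below are assumptions for the sake of the example, not taken from the disclosure:

```python
# Illustrative sketch: deriving a KPI from raw performance counters,
# as a 5G KPI engine might. Counter names and formula are hypothetical.
def call_drop_rate(counters):
    """Dropped-call rate (%) computed from two hypothetical counters."""
    attempts = counters["calls_attempted"]
    drops = counters["calls_dropped"]
    return (drops / attempts * 100) if attempts else 0.0

rate = call_drop_rate({"calls_attempted": 2000, "calls_dropped": 30})
# 30 dropped calls out of 2000 attempts, i.e. a 1.5 % drop rate
```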
Ingestion layer: The Ingestion layer (not shown in FIG. 1A) forms a key part of the IPM system [100a]. The Ingestion layer primarily performs the function of establishing an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes the data by validating the data integrity and correctness to ensure that the data is fit for further use. Following the validation, the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e]. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management system [100a] through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
Caching layer [100c]: The Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and Streaming Engine [100l]. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
Computation layer [100d]: The Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and the Streaming Engine [100l] utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine [100h] performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager [100q] assesses and ensures the quality of the services. The Streaming Engine [100l] processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
Message broker [100e]: The Message Broker [100e], an integral part of the IPM system [100a], operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and to mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
Graph layer [100f]: The Graph Layer [100f] plays a pivotal role in the IPM system [100a]. It can model a variety of data types, including alarm, counter, configuration, CDR data, Inframetric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Graph Layer [100f] acts as a Relationship Modeler that offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Probe and Alarm data, elucidating their interrelationships. Moreover, the Relationship Modeler is adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the IPM System [100a], endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another microservice. A microservice refers to a small, independent service within the system architecture that provides a specific function; microservices communicate with one another through mechanisms such as API calls and remote procedure calls. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the IPM System [100a], designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System [100a], providing a user-friendly yet advanced platform for executing computing tasks in parallel. The parallel computing framework [100i] showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System [100a], enabling multiple clients to access and interact with data seamlessly. The Distributed File System [100j] is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System [100a], designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic, including round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system. Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System [100a]. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes.
Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine [100l] is configured to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the Integrated Performance Management System [100a].
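The round-robin and header-based routing strategies of the Load Balancer [100k] described earlier can be sketched as follows. This is only an illustrative model of the dispatch logic, not the LB's actual implementation; the server names and the header key are hypothetical.

```python
import itertools

class MiniLoadBalancer:
    """Illustrative dispatcher combining round-robin and header-based routing."""
    def __init__(self, servers):
        self._round_robin = itertools.cycle(servers)  # rotate requests evenly
        self._header_routes = {}                      # header value -> server

    def add_header_route(self, header_value, server):
        self._header_routes[header_value] = server

    def dispatch(self, request):
        # Header-based dispatch takes priority when a matching route exists;
        # otherwise fall back to round-robin scheduling.
        service = request.get("headers", {}).get("X-Target-Service")
        return self._header_routes.get(service) or next(self._round_robin)

lb = MiniLoadBalancer(["backend-1", "backend-2"])
lb.add_header_route("analysis", "analysis-engine-host")
```

Requests without a recognized header rotate across `backend-1` and `backend-2`, while requests carrying the hypothetical `X-Target-Service: analysis` header are pinned to a dedicated host.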
Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System [100a]. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
The Correlation Engine [100n]: The correlation engine [100n] provides provisioning support. A correlation model can be provisioned from the UI and associated with single/multiple trigger points to run a particular correlation. It can be triggered automatically as soon as triggers are received from different components in the platform across alarm, counter, KPI, CDR, and metric data against a provisioned source trigger point. The correlation engine [100n] also provides hypothesis validation support for an on-demand execution feature for different types of correlation, providing an output that can be visualized on the UI or exported from the UI. The correlation engine [100n] may use learning models and machine learning algorithms to correlate the alarms with the raw data or clear codes or infrastructure events received from other systems. The correlation engine constantly monitors and compares the collected data with the baseline behaviour to detect any deviations. On any violation, the pre-defined remediation action is triggered in order to maintain network consistency.
The Anomaly Detection Layer [100o]: The Anomaly Detection Layer [100o] is another key subsystem of the IPM system [100a]. The fundamental purpose of the Anomaly Detection Layer [100o] is to identify and detect anomalies. The anomaly detection layer [100o] may drill down to the level of the server and precisely identify the problematic elements in the network.
[0052] FIG. 1B illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture/ platform [100B], in accordance with exemplary implementation of the present disclosure. The MANO architecture [100B] is developed for managing telecom cloud infrastructure automatically, managing design or deployment design, managing instantiation of network node(s)/ service(s) etc. The MANO architecture [100B] deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/ Container Network Function (CNF). The system may comprise one or more components of the MANO architecture [100B]. The MANO architecture [100B] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
[0053] As shown in FIG. 1B, the MANO architecture [100B] comprises a user interface layer, a network function virtualization (NFV) and software defined network (SDN) design function module [104], a platforms foundation services module [106], a platform core services module [108] and a platform resource adapters and utilities module [112]. All the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure. [0054] The NFV and SDN design function module [104] comprises a VNF lifecycle manager (compute) [1042], a VNF catalog [1044], a network services catalog [1046], a network slicing and service chaining manager [1048], a physical and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] is responsible for deciding on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage the overall flow of incoming/ outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] is responsible for determining which sequence is to be followed for executing the process; for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2. The VNF catalog [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network services catalog [1046] stores the information of the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/ network functions (NFs)) that must be applied to a specific networked data packet. The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs.
Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is used for the CNFs lifecycle management.
[0055] The platforms foundation services module [106] comprises a microservices elastic load balancer [1062], an identity & access manager [1064], a command line interface (CLI) [1066], a central logging manager [1068], and an event routing manager [1070]. The microservices elastic load balancer [1062] is used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] is used for login purposes. The command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during the run time. The central logging manager [1068] is responsible for keeping the logs of every service. These logs are generated by the MANO platform [100B]. These logs are used for debugging purposes. The event routing manager [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
[0056] The platforms core services module [108] comprises an NFV infrastructure monitoring manager [1082], an assure manager [1084], a performance manager [1086], a policy execution engine [1088], a capacity monitoring manager [1090], a release management (mgmt.) repository [1092], a configuration manager & GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100], a VNF backup & upgrade manager [1102], a micro service auditor (MAUD) [1104], and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] is responsible for supervising the alarms the vendor is generating. The performance manager [1086] is responsible for managing the performance counters. The policy execution engine (PEGN) [1088] is responsible for managing all the policies. The capacity monitoring manager (CMM) [1090] is responsible for sending the request to the PEGN [1088]. The release management (mgmt.) repository (RMR) [1092] is responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It is further noted that the policy execution engine (PEGN) [1088], the configuration manager & GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedule tasks such as, but not limited to, triggering an event and traversing the network graph.
The VNF backup & upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The micro service auditor [1104] audits the microservices. For example, in a hypothetical case where instances that were not instantiated by the MANO architecture [100B] are using the network resources, the micro service auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100B], thereby assuring that the services only run on the MANO platform [100B]. The platform operations, administration and maintenance manager [1106] is used for newer instances that are spawning.
[0057] The platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122]; a generic decoder and indexer (XML, CSV, JSON) [1124]; a service adaptor [1126]; an API adapter [1128]; and an NFV gateway [1130]. The platform external API adaptor and gateway [1122] is responsible for handling the external services (to the MANO platform [100B]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly receives the data of the vendor system in the XML, CSV, and JSON formats. The service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100B] for communication. The API adapter [1128] is used to connect with the virtual machines (VMs). The NFV gateway [1130] is responsible for providing the path to each service going to/ incoming from the MANO architecture [100B].
[0058] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for counters and key performance indicators (KPIs) policy management in a network, utilising the system. In another implementation, the computing device [200] itself implements the method for counters and key performance indicators (KPIs) policy management in a network using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0059] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random access memory (RAM) or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0060] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane. [0061] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein.
In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0062] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0063] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0064] The present disclosure is implemented by a system [300] (as shown in FIG. 3). In an implementation, the system [300] may include the computing device [200] (as shown in FIG. 2). It is further noted that the computing device [200] is able to perform the steps of a method [400] (as shown in FIG. 4). [0065] Referring to FIG. 3, an exemplary block diagram of a system [300] for providing counters and key performance indicators (KPIs) policy management in a network is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], and at least one execution unit [322] in at least one cron scheduler [304]. The system [300] further comprises at least one transceiver unit [306], at least one identification unit [308], at least one evaluation unit [310], at least one report generation unit [312], at least one storage unit [314] and at least one alarm unit [324] in at least one IPM [100a]. The system further comprises at least one calibration unit [318] in at least one learning module [320]. Also, all of the components/ units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise multiple such units or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device / or may be independent of but in communication with the user device (may also be referred herein as a UE). In another implementation, the system [300] may reside in a server or a network entity.
In yet another implementation, the system [300] may reside partly in the server/ network entity and partly in the user device.
[0066] The system [300] is configured for counters and key performance indicators (KPIs) policy management in a network, with the help of the interconnection between the components/ units of the system [300].
[0067] Prior to transmitting a request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], the user interface unit [316] at the UE is configured to create the one or more policies. Each policy from the one or more policies is associated with data. The data associated with each policy from the one or more policies includes, but may not be limited to, one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each counter from the one or more counters, and an email group to receive a KPI report. The policies can be created and scheduled for each KPI individually for regular observation. [0068] The one or more counters refer to raw metrics which are collected from various network entities to detect a specific event, for example, the number of times a request fails to be delivered or the number of times a response is not received. The one or more KPIs are created from the one or more counters. For example, a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to the number of requests delivered and the number of requests that failed to be delivered. The aggregation levels associated with the one or more KPIs refer to a network geographical area such as a circle, blade, instance, cluster, etc. The aggregation levels are defined by users. This determines at what granular level the user wants to analyse the KPIs. Here, the user may be a system operator, a network operator, and the like. The schedule is defined as the time period to measure the counters and KPIs.
The schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs includes but may not be limited to a time interval type and a time interval size. Moreover, for a single KPI, multiple users can schedule their policies at different aggregation levels. One can include any number of counters from a network node in a policy and schedule it at any level. Further, the one or more notification templates may refer to specific formats in which the user wishes to receive the reports. The user notification group information may comprise an email group to which the reports need to be delivered. The notification group information is not limited to emails, but may also comprise phone numbers, IP address, etc. of the users. Users can choose the email group to which the generated KPI report needs to be sent.
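Taken together, the data items above suggest that a single policy record bundles counters, KPIs, aggregation levels, schedules, severity breach thresholds, and notification targets. The dictionary below is one hypothetical shape for such a record; all field names and values are illustrative and not taken from the actual IPM schema.

```python
# Hypothetical policy record; every field name and value is illustrative.
policy = {
    "name": "request-success-rate-hourly",
    "counters": ["requests_delivered", "requests_failed"],
    "kpis": [
        {
            "name": "success_rate",
            "formula": "requests_delivered / (requests_delivered + requests_failed) * 100",
            "aggregation_level": "circle",  # e.g. circle, blade, instance, cluster
            "schedule": {"interval_type": "hour", "interval_size": 1},
            # Severity conditions on the KPI value (in %), mirroring the text.
            "severity_breach_thresholds": {
                "no_breach": "> 99.5",
                "warning": "> 99 and < 99.5",
                "major": "< 99",
                "critical": "< 80",
            },
        }
    ],
    "notification": {"email_group": "noc-team@example.com", "template": "default"},
}
```

Such a record could be stored as-is in the Distributed Data Lake [100u] and looked up later by counter and KPI names when the cron scheduler [304] requests execution.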
[0069] The one or more severity breach thresholds refer to predefined limits, each defining a value above or below which a breach condition occurs. The values as defined for each KPI are referred to as severity breach thresholds. For example, for the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate >99% and <99.5%, then breach condition is detected with threshold severity defined as “warning”
If Success Rate <99%, then the breach condition is detected with threshold severity defined as “major”
If Success Rate <80%, then the breach condition is detected with threshold severity defined as “critical”. [0070] The user interface unit [316] is further configured to transmit to the IPM [100a], the one or more policies comprising the data. Further, the storage unit [314] at the IPM [100a] is configured to store the one or more policies in a database. In one example, the database is the distributed data lake (DDL) [100u] as depicted in FIG. 1A.
[0071] Once the policies are created and stored at the IPM [100a], the transceiver unit [302] at the cron scheduler [304] is configured to transmit a request for execution of one or more policies at a pre-defined interval to the IPM [100a]. The pre-defined interval is a periodic time period which defines when the policies should be executed and may be defined by the user. In an implementation, the pre-defined interval may be 1 hour, where the counter data may be requested by the user.
[0072] The transceiver unit [306] at the IPM [100a] receives a request for a report. In one example, the request includes but may not be limited to a set of counters and a set of KPIs for which a policy is to be executed.
[0073] Post receiving the request for the report comprising the set of counters and the set of KPIs, the execution unit [322] is configured to run at the cron scheduler [304], a cron for the policy comprising the set of KPIs and the set of counters. The cron refers to a time-based task scheduler. The cron allows the users to schedule tasks at pre-defined intervals of time. The cron may periodically execute the policy or policies needed to analyse the set of counters and the set of KPIs to generate the report.
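A cron-style recurring execution as described above can be sketched with Python's standard-library scheduler. The policy name is hypothetical, the interval is shortened to fractions of a second purely for illustration, and this is not the cron scheduler's [304] actual implementation.

```python
import sched
import time

executed = []  # records each policy execution, for illustration only

def run_policy(policy_name):
    """Stand-in for sending the policy execution request to the IPM [100a]."""
    executed.append(policy_name)

def recurring(scheduler, policy_name, interval_seconds, runs_left):
    # Execute the policy, then re-arm the task, mimicking a cron entry.
    run_policy(policy_name)
    if runs_left > 1:
        scheduler.enter(interval_seconds, 1, recurring,
                        (scheduler, policy_name, interval_seconds, runs_left - 1))

scheduler = sched.scheduler(time.time, time.sleep)
# Fire every 0.01 s (instead of e.g. every hour), three times, for illustration.
scheduler.enter(0, 1, recurring, (scheduler, "success-rate-policy", 0.01, 3))
scheduler.run()
```

A production cron would run indefinitely rather than a fixed number of times; the `runs_left` counter here only keeps the sketch terminating.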
[0074] Based on the received set of counters and the set of KPIs, as requested by the user, the identification unit [308] is configured to identify a set of policies from the one or more policies which are defined for the received set of counters and the set of KPIs. The identification refers to selecting a relevant set of policies from the one or more policies based on the set of counters and the set of KPIs. The policies can be created and scheduled for each KPI individually for regular observation. Therefore, when the cron scheduler [304] sends a request to execute a policy based on the counters and KPIs as provided by the user, the IPM [100a] picks the policies that have been created for the counters and KPIs as requested by the user.
[0075] The evaluation unit [310] at the IPM [100a], is configured to evaluate the set of policies including the set of counters and the set of KPIs based on a set of severity breach thresholds. The one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities. The breach conditions associated with the set of counters and the set of KPIs is identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds/falls below a corresponding severity breach threshold from the set of severity breach thresholds as defined in the policies. In an implementation, the one or more severities may be warning, major and critical.
[0076] The identification unit [308] is configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds. For example, for the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate > 99% and < 99.5%, then a breach condition is detected with threshold severity defined as “warning”
If Success Rate < 99%, then a breach condition is detected with threshold severity defined as “major”
If Success Rate < 80%, then a breach condition is detected with threshold severity defined as “critical”.
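The severity evaluation described above can be sketched as a simple classifier. This is an illustrative sketch only; the function name and the handling of the boundary values (which the example bands leave open) are assumptions, not the claimed implementation:

```python
def classify_breach(success_rate: float) -> str:
    """Map a success-rate KPI value (in percent) to a breach severity.

    The bands mirror the example thresholds in the text; exact
    boundary behaviour (e.g. a rate of exactly 99.5%) is an assumption.
    """
    if success_rate > 99.5:
        return "no breach"
    if success_rate < 80.0:
        return "critical"
    if success_rate < 99.0:
        return "major"
    return "warning"  # covers the 99%..99.5% warning band
```

A value such as 95.0 would fall in the “major” band, while 50.0 would be flagged “critical”.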
[0077] The alert unit [324] is configured to trigger one or more alarms based on the identified set of breach conditions. In one example, the one or more alarms may be one of a warning alarm, a major alarm and a critical alarm. Each of the one or more alarms is associated with a severity level indicating the seriousness of the breach. For example, the warning alarm may correspond to low severity, while the critical alarm may correspond to high severity. The values of KPIs and counters falling beyond the thresholds, which result in threshold breaches, are highlighted according to the severity. These severity breaches can then be used for several purposes including but not limited to notifying the user and raising an alarm.
[0078] Further, the report generation unit [312] is configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions. For generating the one or more reports by the report generation unit [312] at the IPM [100a], the severity breach thresholds are fetched from the database.
[0079] Further, the transceiver unit [306] is configured to send from the IPM [100a], the one or more reports to one or more users based on the set of policies. The one or more reports sent to the one or more users include but may not be limited to a delta KPI report. The delta relates to the difference in result between the previously sent reports and the generated one or more reports. The system [300] also provides the delta for user-chosen dates, where the system [300] may utilize the stored pre-computed KPI data to perform the real-time calculation and output delivery. For example, if the user has already received and downloaded a report with details about 2 KPIs, then the IPM [100a] will not send the report comprising these 2 KPIs, but will only send the difference, that is, the report for the KPIs which the user has not received.
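The delta KPI report described above can be sketched as a dictionary difference. The function name and the flat `{kpi_name: value}` shape are hypothetical; this is a sketch of the idea, not the disclosed implementation:

```python
def delta_report(previously_sent: dict, current: dict) -> dict:
    """Return only the KPI entries the user has not already received,
    plus entries whose values changed since the last sent report.
    Both arguments are assumed to map KPI names to computed values.
    """
    return {
        kpi: value
        for kpi, value in current.items()
        if kpi not in previously_sent or previously_sent[kpi] != value
    }
```

So if the user already received `{"success_rate": 99.0}`, a new computation adding a latency KPI would yield a delta containing only the latency entry.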
[0080] In an implementation of the present disclosure, after generation of the report, the IPM [100a] interacts with a mail server to send the generated report to the one or more users. The policies created by the user comprise an email group which should receive the report. The IPM [100a] interacts with the mail server and sends the generated report to the email group configured in the policy.
[0081] Further, the IPM [100a] sends the identified set of breach conditions to a learning module, where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds. The calibration is based on a set of factors including but not limited to weather, a holiday and a disaster. As can be understood, these are external factors and therefore may change from time to time. Based on the changing factors, the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, during the day a success rate KPI threshold increases, while the success rate KPI threshold may decrease during the night. The storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
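One way to picture the calibration step is as a multiplicative adjustment of a base threshold. The factor names and multipliers below are purely illustrative assumptions; in the disclosure, a learning module would derive such adjustments from measured counters and external conditions:

```python
def calibrate_threshold(base: float, factors: dict) -> float:
    """Adjust a severity breach threshold for external factors.

    `factors` is a hypothetical mapping of condition flags; the
    multipliers are placeholders, not learned values.
    """
    adjusted = base
    if factors.get("daytime"):
        adjusted *= 1.001   # stricter threshold during the day
    if factors.get("holiday"):
        adjusted *= 0.995   # relaxed on holidays
    if factors.get("disaster"):
        adjusted *= 0.95    # strongly relaxed during a disaster
    return round(adjusted, 3)
```

This mirrors the example in the text where a success-rate threshold rises during the day and falls at night; the modified value would then be stored back in the database for the policy.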
[0082] Referring to FIG. 4, an exemplary method flow diagram [400] for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure is shown. In an implementation the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0083] At step [404], the method comprises transmitting, by a transceiver unit [302], from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a]. In an implementation, the pre-defined interval may be 1 hour, where the counter data may be requested by the user.
[0084] It is to be noted that prior to transmitting the request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], the user interface unit [316] creates the one or more policies. Each policy from the one or more policies is associated with data. The data associated with each policy from the one or more policies includes but may not be limited to one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each counter from the one or more counters, one or more notification templates and user notification group information. The policies can be created and scheduled for each KPI individually for regular observation.
[0085] The one or more counters refer to raw metrics which are collected from various network entities to detect a specific event, for example, the number of times a request fails to be delivered or the number of times a response is not received. The one or more KPIs are created from the one or more counters. For example, a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to the number of requests delivered and the number of requests that failed to be delivered. The aggregation levels associated with the one or more KPIs refer to a network geographical area such as circle, blade, etc. The aggregation levels are defined by users. This determines at what granular level the user wants to analyse the KPIs. Here, the user may be a system operator, a network operator, and the like. The schedule is defined as the time period to measure the counters and KPIs. The schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs includes but may not be limited to a time interval type and a time interval size. Moreover, for a single KPI, multiple users can schedule their policies at different aggregation levels. One can include any number of counters from a network node in a policy and schedule it at any level. The one or more notification templates may refer to specific formats in which the user wishes to receive the reports. The user notification group information may comprise an email group to which the reports need to be delivered. The notification group information is not limited to emails, but may also comprise phone numbers, IP addresses, etc. of the users. Users can choose the email group to which the generated KPI report needs to be sent.
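The policy data enumerated above might be modelled as a simple record type. All field names below are assumptions drawn from the description, not the actual schema of the disclosed system:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Illustrative shape of the data attached to one policy."""
    counters: list                    # raw metric names, e.g. "requests_failed"
    kpis: list                        # derived KPI names, e.g. "success_rate"
    aggregation_levels: dict          # KPI name -> level, e.g. "circle" or "blade"
    schedule: dict                    # metric -> {"interval_type", "interval_size"}
    breach_thresholds: dict           # metric -> {severity: threshold value}
    notification_templates: list = field(default_factory=list)
    notification_group: dict = field(default_factory=dict)  # emails, phones, IPs
```

A policy for the success-rate example could then be constructed with `aggregation_levels={"success_rate": "circle"}` and per-severity threshold values under `breach_thresholds`.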
[0086] The one or more severity breach thresholds refer to predefined limits which define a value above or below which a breach condition occurs. The values as defined for each KPI are referred to as severity breach thresholds. For example, for the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate > 99% and < 99.5%, then a breach condition is detected with threshold severity defined as “warning”
If Success Rate < 99%, then a breach condition is detected with threshold severity defined as “major”
If Success Rate < 80%, then a breach condition is detected with threshold severity defined as “critical”.
[0087] The storage unit [314] stores the one or more policies in a database. In one example, the database is the distributed data lake (DDL) [100u] as depicted in FIG. 1A.
[0088] At step [406], the method comprises receiving, by the transceiver unit [306], at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs. Post receiving the request for the report comprising the set of counters and the set of KPIs, the execution unit [322] is configured to run at the cron scheduler [304], a cron for the set of KPIs and the set of counters. The cron refers to a time-based task scheduler. The cron allows the users to schedule tasks at predefined intervals of time. The cron may periodically execute the tasks needed to gather the set of counters and the set of KPIs to generate the report.
[0089] At step [408], the method comprises identifying, by an identification unit [308], at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs. The identification refers to selecting a relevant set of policies from the one or more policies based on the set of counters and the set of KPIs. The policies can be created and scheduled for each KPI individually for regular observation. Therefore, when the cron scheduler [304] sends a request to execute a policy based on the counters and KPIs as provided by the user, the IPM [100a] picks the policies that have been created for the counters and KPIs as requested by the user.
[0090] Next at step [410], the method comprises evaluating, by an evaluation unit [310], at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The one or more severity breach threshold values associated with each KPI from the one or more KPIs and with each counter from the one or more counters are associated with one or more severities. The set of breach conditions associated with the set of counters and the set of KPIs is identified when a current value of a counter from the set of counters or of a KPI from the set of KPIs exceeds or falls below a corresponding severity breach threshold from the set of severity breach thresholds. In an implementation, the one or more severities may be warning, major and critical.
[0091] Next at step [412], the method comprises identifying, by the identification unit [308], at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds. For example, for the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate > 99% and < 99.5%, then a breach condition is detected with threshold severity defined as “warning”
If Success Rate < 99%, then a breach condition is detected with threshold severity defined as “major”
If Success Rate < 80%, then a breach condition is detected with threshold severity defined as “critical”.
[0092] Next at step [414], the method comprises generating, by a report generation unit [312], at the IPM [100a], one or more reports comprising the set of breach conditions.
[0093] The alert unit [324] triggers one or more alarms based on the set of breach conditions. In one example, the one or more alarms may be one of a warning alarm, a major alarm and a critical alarm. Each of the one or more alarms is associated with a severity level indicating the seriousness of the breach. For example, the warning alarm may correspond to low severity, while the critical alarm may correspond to high severity. The values of KPIs and counters falling beyond the thresholds, which result in threshold breaches, are highlighted according to the severity. These severity breaches can then be used for several purposes including but not limited to notifying the user and raising an alarm.
[0094] Further, at step [416], the method comprises sending, by the transceiver unit [306], from the IPM [100a], the one or more reports to one or more users based on the set of policies. Before sending, the one or more reports may be generated by the report generation unit [312]. The one or more reports sent to the one or more users include but may not be limited to a delta KPI report. The delta relates to the difference in result between the previously sent reports and the generated one or more reports. The method [400] also provides the delta for user-chosen dates, where the method [400] may utilize the stored pre-computed KPI data to perform the real-time calculation and output delivery.
[0095] In an implementation of the present disclosure, after generation of the report, the IPM [100a] interacts with a mail server to send the generated report to the one or more users.
[0096] Further, the IPM [100a] sends the identified set of breach conditions to a learning module, where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds. The calibration is based on a set of factors including but not limited to weather, a holiday and a disaster. As can be understood, these are external factors and therefore may change from time to time. Based on the changing factors, the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, during the day a success rate KPI threshold increases, while the success rate KPI threshold may decrease during the night. The storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
[0097] The method [400] thereafter terminates at step [418].
[0098] Referring to FIG.5, an exemplary implementation of the system [500] for counters and key performance indicator (KPIs) policy management in a network, in accordance with exemplary implementations of the present disclosure is shown.
[0099] The implementation system [500] comprises the user interface (UI) unit [316] at a User Equipment, the load balancer [100k], the integrated performance management (IPM) [100a], the computational layer [100d], the distributed data lake [100u], the distributed file system [100j], the cron scheduler [304], a mail server [502] and an artificial intelligence/machine learning layer [504].
[0100] The UI unit [316] may be one of a graphical user interface (GUI), a command line interface (CLI), and the like. The GUI refers to an interface through which the user interacts with the system [500] by a visual or graphical representation of icons, menus, etc. The GUI is an interface that may be used within a smartphone, laptop, computer, etc. The CLI refers to a text-based interface through which the user interacts with the system [500]. The user may input text lines, called command lines, in the CLI to access the data in the system. The user creates one or more policies related to counters and KPIs at the UI unit [316]. Once the user has finished creating the policies, the user saves the one or more policies. The request to save the one or more policies is transmitted by the UI unit [316] to the load balancer [100k] to distribute the one or more policies to one or more instances of the IPM [100a].
[0101] The load balancer (LB) [100k] is a component of the IPM architecture [100A] that efficiently distributes incoming network traffic or requests. The load balancer [100k] ensures even distribution of requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic, including round-robin scheduling, header-based request dispatch, and context-based request dispatch as defined in FIG. 1A.
[0102] The request to save the one or more policies is further transmitted by the load balancer [100k] to the IPM [100a]. The IPM [100a] is configured to collect, process, and manage performance counter data from data sources within the network. The counter data includes metrics such as connection speed, latency, data transfer rates, and many others. The IPM [100a] is further configured to collect the one or more policies and send them for storage in the Distributed Data Lake [100u]. The Distributed Data Lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis.
[0103] Further, the user can also request one or more reports for a set of counters and KPIs which the user wants to analyse and observe. The user sends the request from the UI unit [316] to the IPM [100a] via the load balancer [100k]. Once the IPM [100a] receives the request for report generation, the IPM [100a] identifies a set of policies comprising the requested set of counters and KPIs. The IPM [100a] then evaluates the set of policies based on a set of severity breach thresholds. The severity breach thresholds are defined in the set of policies for the counters and KPIs. Based on the evaluation, the IPM [100a] identifies a set of breach conditions for the requested set of counters and KPIs. Further, based on the identified set of breach conditions, the IPM [100a] generates one or more reports for the user comprising the set of breach conditions, which are calibrated based on the severity breach thresholds. The breach thresholds are highlighted based on the severities identified by the breach conditions. For example, for the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate > 99% and < 99.5%, then a breach condition is detected with threshold severity defined as “warning”
If Success Rate < 99%, then a breach condition is detected with threshold severity defined as “major”
If Success Rate < 80%, then a breach condition is detected with threshold severity defined as “critical”.
[0104] In an implementation, if the identified breach condition falls in the severity defined as “critical”, then the report will highlight this in dark red color. Similarly, if the identified breach condition falls in the severity defined as “major”, then the report will highlight this in red color. And, if the identified breach condition falls in the severity defined as “warning”, then the report will highlight this in orange color.
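The severity-to-color highlighting described above can be sketched as a simple lookup. The mapping mirrors the example colors in the text; the function name and the fallback value are hypothetical:

```python
# Illustrative mapping from breach severity to report highlight color,
# following the example: critical -> dark red, major -> red, warning -> orange.
SEVERITY_COLORS = {
    "critical": "dark red",
    "major": "red",
    "warning": "orange",
}

def highlight(severity: str) -> str:
    """Return the highlight color for a breach severity, or
    "no highlight" for values outside the breach bands."""
    return SEVERITY_COLORS.get(severity, "no highlight")
```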
[0105] After the one or more reports are generated by the IPM [100a], the IPM [100a] identifies the mail server [502] for sending the reports to the user. The mail server [502] is a system responsible for sending, receiving, and storing emails. The mail server [502] ensures that emails are correctly routed to the users for the one or more policies.
[0106] Further, the IPM [100a] sends the identified set of breach conditions to a learning module, where the breach conditions are calibrated by the calibration unit [318] based on the severity breach thresholds. The calibration is based on a set of factors including but not limited to weather, a holiday and a disaster. As can be understood, these are external factors and therefore may change from time to time. Based on the changing factors, the calibration unit [318] is further configured to modify the severity breach thresholds for the set of policies. To calibrate, the calibration unit [318] may measure the counters, calculate the KPIs, and, based on the geographical conditions, time and other factors, calibrate the threshold values and accordingly modify the severity breach thresholds for the set of policies. In one example, during the day a success rate KPI threshold increases, while the success rate KPI threshold may decrease during the night. The storage unit [314] is configured to store the modified severity breach thresholds for the set of policies in the database.
[0107] Further, the learning module is an Artificial Intelligence (AI)/Machine Learning (ML) layer [504] which calibrates the severity breach threshold for the one or more identified policies based on geographical conditions, time and other factors.
[0108] Once the IPM [100a] receives the request for one or more reports comprising the set of counters and KPIs from the user, the cron scheduler [304] runs a cron for the set of counters and KPIs as requested by the user. The cron refers to a time-based task scheduler. The cron scheduler [304] allows the users to schedule tasks at pre-defined intervals of time. The cron may periodically execute the tasks needed to gather the values for the set of counters and KPIs to generate the report. The cron information and its state of execution, such as in progress or terminated, is stored in the DDL [100u].
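A minimal stand-in for the cron behaviour described above might run a task at a fixed interval and record each run's execution state (in progress / terminated), since the text says this state is stored in the DDL. The function and record shape are illustrative assumptions, not the scheduler's actual interface:

```python
import time

def run_cron(task, interval_seconds, iterations):
    """Execute `task` at a fixed interval, recording an execution
    state per run, loosely mimicking the cron scheduler [304].
    A real cron would run indefinitely; `iterations` bounds the sketch."""
    states = []
    for _ in range(iterations):
        record = {"state": "in progress", "started_at": time.time()}
        states.append(record)
        task()                      # e.g. evaluate a policy's KPIs
        record["state"] = "terminated"
        time.sleep(interval_seconds)
    return states
```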
[0109] The Computation Layer [100d] serves as the main hub for complex data processing tasks. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur.
[0110] The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System [100a] that enables multiple clients to access and interact with data seamlessly. The Distributed File System [100j] is designed to manage data files that are partitioned into numerous segments known as chunks.
[0111] Referring to FIG. 6, an exemplary implementation of a signal flow diagram [600] for creating policies, in accordance with exemplary implementations of the present disclosure is shown.
[0112] At Step 1, a user [602] creates the one or more policies at the UI unit [316]. In one example, after creation of the one or more policies, the user [602] may select to save the one or more policies.
[0113] At Step 2, the UI unit [316] sends a request via the load balancer [100k] to save the one or more policies at the IPM [100a].
[0114] At Step 3, the load balancer [100k] sends the one or more policies to the IPM [100a] for saving.
[0115] Further at Step 4, the IPM [100a] saves the data of the one or more policies at the distributed data lake [100u].
[0116] Further at Step 5, the IPM [100a] forwards the request to the cron scheduler [304] for running a cron for the one or more policies.
[0117] At Step 6, the state of the cron scheduler [304] is stored at the distributed data lake [100u].
[0118] At Step 7, the cron scheduler [304] sends an acknowledgment for starting the cron scheduling for the one or more policies to the IPM [100a].
[0119] Next, at Step 8, the IPM [100a] sends a confirmation of the scheduling of the one or more policies to the UI unit [316].
[0120] At step 9, the UI unit [316] displays an update of the one or more polices being saved successfully, to the user.
[0121] Referring to FIG. 7, an exemplary implementation of a signal flow diagram [700] for counters and KPIs policy management, in accordance with exemplary implementations of the present disclosure is shown.
[0122] As described with respect to FIG. 6, the user creates one or more policies and saves them at the DDL [100u]. Later, the user sends a request for one or more reports comprising a set of counters and KPIs which the user wants to analyse and observe. After the report request from the user is received, a request for execution of one or more policies at a pre-defined interval is received at the IPM [100a] from the cron scheduler [304]. As shown in FIG. 7, at Step 1, the cron scheduler [304] sends the request for execution of one or more policies comprising the requested set of counters and KPIs to the IPM [100a].
[0123] At Step 2, the request is transmitted to the computation layer [100d] for processing if the request is received before the retention period expires for the computation layer [100d]. This means that if the data related to the one or more policies to be executed is active and present in the cache, the data is collected from the computation layer [100d]. Furthermore, at Step 3, the IPM [100a] is configured to receive an acknowledgement from the computational layer [100d]. The retention period refers to the maximum duration of time for which the computation layer [100d] stores the data in its cache. In one example, the retention period may be defined as 10 days. Therefore, the present disclosure utilizes the stored pre-computed KPI data to perform the real-time calculation and output delivery.
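The retention-period decision at Steps 2 and 8 can be sketched as a single comparison, assuming the 10-day example retention period. The function and source names are hypothetical:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=10)  # example retention period from the text

def pick_data_source(requested_at: datetime, cached_at: datetime) -> str:
    """Decide whether pre-computed KPI data can be served from the
    computation layer's cache or must be fetched from the data lake.
    A sketch; real cache-eviction logic is not specified in the text."""
    if requested_at - cached_at <= RETENTION:
        return "computation layer cache"
    return "distributed data lake"
```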
[0124] At Step 4, the computation layer [100d] sends a request to the distributed file system [100j] to access the stored data to execute the one or more policies.
[0125] In response to the request, at Step 5, the distributed file system [100j] sends the data based on the request.
[0126] At Step 6, the computation layer [100d] performs computations on the data. The computation may identify a set of breach conditions based on the defined set of severity breach thresholds in the one or more policies. The severity breach thresholds are defined for the set of counters and the KPIs.
[0127] At Step 7, the computation layer [100d] sends back the KPI data based on the computations to the IPM [100a].
[0128] Further, if the request to execute one or more policies is received after the retention period expires for the computation layer [100d], then at Step 8, the IPM [100a] queries the distributed data lake [100u] to fetch the required counter data.
[0129] Further, at Step 9, the distributed data lake [100u] sends the counter data based on the query to the IPM [100a].
[0130] Next, at Step 10, based on the received KPI data and the fetched counter data, the IPM [100a] computes the final data to generate the report. This computation identifies a set of breach conditions based on the defined set of severity breach thresholds in the one or more policies. The severity breach thresholds are defined for the set of counters and the KPIs. The final data computed is in the form of a report which the user can use to analyse the KPIs. The set of breach conditions associated with the counters and the KPIs is identified when a current value of a counter from the set of counters or of a KPI from the set of KPIs exceeds or falls below a corresponding severity breach threshold from the set of severity breach thresholds.
[0131] At Step 11, the IPM [100a] establishes a connection with the mail server [502] to send the report to the user [602].
[0132] At Step 12, the IPM [100a] sends a request to calibrate the set of breach conditions to the AI/ML [504]. The breach conditions are calibrated by the AI/ML [504] based on the severity breach thresholds. The calibration is based on a set of factors including but not limited to weather, a holiday and a disaster. As can be understood, these are external factors and therefore may change from time to time. Based on the changing factors, the AI/ML [504] is further configured to modify the severity breach thresholds for the set of policies.
[0133] At Step 13, the AI/ML [504] sends a request to the DDL [100u] to save the modified severity breach thresholds in the one or more policies.
[0134] At Step 14, the mail server [502] sends a notification via mail to all users based on the email group information mentioned in the one or more policies.
[0135] Referring to FIG. 8, an exemplary implementation of a signal flow diagram [800] for showing a highlighted result to the user based on user request, in accordance with exemplary implementations of the present disclosure is shown.
[0136] At Step 1, the user [602] sends a request to the UI unit [316] to show the generated report or the result.
[0137] Further at Step 2, the UI unit [316] sends the request to the load balancer [100k] to fetch the generated report.
[0138] Further at Step 3, the load balancer [100k] forwards the request to the IPM [100a] to fetch the report.
[0139] At Step 4, the IPM [100a] fetches the severity threshold based on the request from the distributed data lake [100u].
[0140] Further at Step 5, the IPM [100a] sends the delta KPI data to the load balancer [100k]. The present disclosure also provides the delta for user-chosen dates via Step 1, which utilizes the stored pre-computed KPI data to perform the real-time calculation and output delivery.
[0141] Further, at Step 6, the load balancer [100k] forwards the data to the UI unit [316].
[0142] At Step 7, the UI unit [316] displays a highlighted report comprising the delta KPI data to the user [602].
[0143] For example, a KPI can be created to assess a success rate of request delivery, based on the counters that collect metrics related to the number of requests delivered and the number of requests that failed to be delivered. For the success rate KPI, the severity breach thresholds may be defined as follows:
If Success Rate > 99.5%, then “no breach condition”
If Success Rate > 99% and < 99.5%, then a breach condition is detected with threshold severity defined as “warning”
If Success Rate < 99%, then a breach condition is detected with threshold severity defined as “major”
If Success Rate < 80%, then a breach condition is detected with threshold severity defined as “critical”.
[0144] In an implementation, if the identified breach condition falls in the severity defined as “critical”, then the report will highlight this in dark red color. Similarly, if the identified breach condition falls in the severity defined as “major”, then the report will highlight this in red color. And, if the identified breach condition falls in the severity defined as “warning”, then the report will highlight this in orange color.
[0145] The present disclosure further discloses a User Equipment (UE). The UE comprises a user interface unit [316]. The user interface unit [316] is configured to create one or more policies comprising a set of counters and a set of KPIs. The UE comprises a transceiver unit to send a request to a load balancer to save the one or more policies. The transceiver unit is further configured to send a request for fetching a result for the set of counters and the set of KPIs. The transceiver unit is further configured to receive a report comprising the result for the set of counters and the set of KPIs. The result comprises one or more highlights for one or more breach conditions. The result is generated by a system [300] comprising a transceiver unit [302] configured to transmit, from a cron scheduler [304], a request for execution of the one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a]. The transceiver unit [306] is configured to receive, at the IPM [100a], a request for the report comprising the set of counters and the set of KPIs. The system [300] comprises an identification unit [308], configured to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs. The system [300] comprises an evaluation unit [310], configured to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The system [300] further comprises the identification unit [308], configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds.
The system [300] further comprises a report generation unit [312], configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds. The transceiver unit [306], is configured to send from the IPM [100a], the one or more reports to the user interface unit [316] of the UE based on the set of policies.
[0146] The present disclosure further discloses a non-transitory computer-readable storage medium storing instructions for counters and key performance indicator (KPIs) policy management in a network. The instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit [302] to transmit, from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a]. The instructions, when executed by the system, further cause the transceiver unit [306] to receive, at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs. The instructions, when executed by the system, further cause an identification unit [308] to identify, at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs. The instructions, when executed by the system, further cause an evaluation unit [310] to evaluate, at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds. The instructions, when executed by the system, further cause the identification unit [308] to identify, at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds. The instructions, when executed by the system, further cause a report generation unit [312] to generate, at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds. The instructions, when executed by the system, further cause the transceiver unit [306] to send, from the IPM [100a], the one or more reports to one or more users based on the set of policies.
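The pre-defined interval at which the cron scheduler [304] requests policy execution is described elsewhere in the disclosure as a schedule comprising a time interval type and a time interval size. As a hypothetical sketch of how such schedule data could be resolved into the next execution time (the helper name and the set of interval types are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical helper: derive a policy's next execution time from its
# schedule data (a time interval type and a time interval size).
UNIT_SECONDS = {"minutes": 60, "hours": 3600, "days": 86400}

def next_run(last_run, interval_type, interval_size):
    """Return the next scheduled run, `interval_size` units after `last_run`."""
    return last_run + timedelta(seconds=UNIT_SECONDS[interval_type] * interval_size)

nxt = next_run(datetime(2024, 10, 4, 12, 0), "minutes", 15)
# → 2024-10-04 12:15:00
```

A cron entry equivalent to this schedule would fire the execution request toward the IPM [100a] at each such computed instant.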
[0147] As is evident from the above, the present disclosure provides a technically advanced solution for counters and key performance indicator (KPIs) policy management in a network. The present solution provides a system and a method for counter and KPI policy management, allowing policies to be created and scheduled for each KPI individually for regular observation. The present disclosure reduces the grunt work and automates the tasks that need to be performed after any kind of breach is observed. The present disclosure further provides a solution through which a single policy for a counter or KPI is applied across many of the IPM modules, such as live monitoring and report generation, without extra effort. The present disclosure devises a tool to calibrate the thresholds of the policies according to weather, holidays, and disasters, to accommodate unforeseen turns of events.
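The calibration of thresholds according to weather, holidays, and disasters can be pictured with a minimal sketch. The multiplier values and function names below are hypothetical illustrations, not values taken from the disclosure; the learning module [320] would determine the actual adjustments:

```python
# Illustrative sketch of severity-breach-threshold calibration by the
# learning module. The factor multipliers are assumed values for the
# example only: expected load spikes (e.g. a holiday) relax thresholds
# so that foreseeable deviations do not raise spurious breaches.
FACTOR_MULTIPLIERS = {"weather": 1.10, "holiday": 1.25, "disaster": 1.50}

def calibrate_thresholds(thresholds, active_factors):
    """Scale each severity threshold by the largest applicable factor."""
    scale = max((FACTOR_MULTIPLIERS[f] for f in active_factors
                 if f in FACTOR_MULTIPLIERS), default=1.0)
    return {severity: value * scale for severity, value in thresholds.items()}

calibrated = calibrate_thresholds({"minor": 100.0, "major": 200.0}, ["holiday"])
# {"minor": 125.0, "major": 250.0}
```

The modified thresholds would then be stored back in the database, as recited for the storage unit [314], so subsequent evaluations use the calibrated values.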
[0148] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, and it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
[0149] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.

Claims

We Claim:
1. A method [400] for counters and key performance indicator (KPIs) policy management in a network, the method comprises:
- transmitting, by a transceiver unit [302], from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a];
- receiving, by the transceiver unit [306], at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs;
- identifying, by an identification unit [308], at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs;
- evaluating, by an evaluation unit [310], at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds;
- identifying, by the identification unit [308], at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds;
- generating, by a report generation unit [312], at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds; and
- sending, by the transceiver unit [306], from the IPM [100a], the one or more reports to one or more users based on the set of policies.
2. The method [400] as claimed in claim 1, wherein prior to transmitting the request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], the method comprises:
- creating, at a user interface unit [316], the one or more policies, wherein each of the policy from the one or more policies is associated with a data;
- transmitting, by the user interface unit [316] to the IPM [100a], the one or more policies comprising the data;
- storing, by a storage unit [314], at the IPM [100a], the one or more policies in a database; and
- forwarding, by the transceiver unit [306], from the IPM [100a] to the cron scheduler [304], a request to schedule the one or more policies based on the data.
3. The method [400] as claimed in claim 2, wherein the data associated with each of the policy from the one or more policies comprises one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters, one or more notification templates, and user notification group information.
4. The method [400] as claimed in claim 3, wherein the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs comprises a time interval type and a time interval size.
5. The method [400] as claimed in claim 3, wherein the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
6. The method [400] as claimed in claim 1, wherein the set of breach conditions associated with the set of counters and the set of KPIs is identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds a corresponding severity breach threshold from the set of severity breach thresholds.
7. The method [400] as claimed in claim 1, further comprises: sending, by the IPM [100a], the set of breach conditions to a learning module [320]; calibrating, by a calibration unit [318], at the learning module [320], the severity breach thresholds associated with the set of breach conditions, wherein the calibration is based on a set of factors comprising at least one of a weather, a holiday and a disaster; modifying, by the calibration unit [318], the severity breach thresholds for the set of policies; and storing, by the storage unit [314], at the learning module [320], the modified severity breach thresholds for the set of policies in the database.
8. The method [400] as claimed in claim 1, wherein post receiving, by the transceiver unit, at the IPM [100a], the request for the report comprising the set of counters and the set of KPIs, the method comprises: running, by an execution unit [322], at the cron scheduler [304], a cron for the set of KPIs and the set of counters.
9. The method [400] as claimed in claim 2, wherein for generating the one or more reports by the report generation unit [312], at the IPM [100a], the severity breach thresholds are fetched from the database.
10. The method [400] as claimed in claim 1, wherein the method further comprises: triggering, by an alert unit [324], one or more alarms based on the set of breach conditions.
11. The method [400] as claimed in claim 1, wherein the one or more reports sent to the one or more users comprises a delta KPI report, wherein the delta relates to the difference in result between the previously sent reports and the generated one or more reports.
12. A system [300] for counters and key performance indicator (KPIs) policy management in a network, the system comprises:
- a transceiver unit [302], configured to transmit, from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a];
- the transceiver unit [306], configured to receive, at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs;
- an identification unit [308], configured to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs;
- an evaluation unit [310], configured to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds;
- the identification unit [308], configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds;
- a report generation unit [312], configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds; and
- the transceiver unit [306], configured to send from the IPM [100a], the one or more reports to one or more users based on the set of policies.
13. The system [300] as claimed in claim 12, wherein prior to transmitting the request for execution of one or more policies from the cron scheduler [304] to the IPM [100a], the system comprises:
- a user interface unit [316], configured to create the one or more policies, wherein each of the policy from the one or more policies is associated with a data;
- the user interface unit [316], configured to transmit to the IPM [100a], the one or more policies comprising the data;
- a storage unit [314], configured to store at the IPM [100a], the one or more policies in a database; and
- the transceiver unit [306], configured to forward from the IPM [100a] to the cron scheduler [304], a request to schedule the one or more policies based on the data.
14. The system [300] as claimed in claim 13, wherein the data associated with each of the policy from the one or more policies comprises one or more counters, one or more KPIs, one or more aggregation levels associated with each KPI from the one or more KPIs, a schedule associated with each counter from the one or more counters, a schedule associated with each KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the KPI from the one or more KPIs, one or more severity breach threshold values associated with each of the counter from the one or more counters and an email group to receive a KPI report.
15. The system [300] as claimed in claim 14, wherein the schedule associated with each counter from the one or more counters and the schedule associated with each KPI from the one or more KPIs comprises a time interval type and a time interval size.
16. The system [300] as claimed in claim 14, wherein the one or more severity breach threshold values associated with each of the KPI from the one or more KPIs and the one or more severity breach threshold values associated with each of the counter from the one or more counters is associated with one or more severities.
17. The system [300] as claimed in claim 12, wherein the set of breach conditions associated with the set of counters and the set of KPIs is identified in an event a current value of each of the counter from the set of counters and each of the KPI from the set of KPIs exceeds a corresponding severity breach threshold from the set of severity breach thresholds.
18. The system [300] as claimed in claim 12, further comprises: sending, by the IPM [100a], the set of breach conditions to a learning module [320]; calibrating, by a calibration unit [318], at the learning module [320], the severity breach thresholds associated with the set of breach conditions, wherein the calibration is based on a set of factors comprising at least one of a weather, a holiday and a disaster; modifying, by the calibration unit [318], the severity breach thresholds for the set of policies; and storing, by the storage unit [314], at the learning module [320], the modified severity breach thresholds for the set of policies in the database.
19. The system [300] as claimed in claim 12, wherein post receiving, by the transceiver unit, at the IPM [100a], the request for the report comprising the set of counters and the set of KPIs, the system comprises: an execution unit [322], configured to run, at the cron scheduler [304], a cron for the set of KPIs and the set of counters.
20. The system [300] as claimed in claim 13, wherein for generating the one or more reports by the report generation unit [312], at the IPM [100a], the severity breach thresholds are fetched from the database.
21. The system [300] as claimed in claim 12, wherein the system further comprises: an alert unit [324], configured to trigger one or more alarms based on the set of breach conditions.
22. The system [300] as claimed in claim 12, wherein the one or more reports sent to the one or more users comprises a delta KPI report, wherein the delta relates to the difference in result between the previously sent reports and the generated one or more reports.
23. A User Equipment (UE) comprising: a user interface unit [316], configured to: create one or more policies comprising a set of counters and a set of KPIs; a transceiver unit, configured to: send a request to a load balancer to save the one or more policies; send a request for fetching a result for the set of counters and the set of KPIs; receive a report comprising the result for the set of counters and the set of KPIs, wherein the result comprises one or more highlights for one or more breach conditions and is generated by a system [300] comprising:
- a transceiver unit, configured to transmit, from a cron scheduler [304], a request for execution of the one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a];
- the transceiver unit, configured to receive, at the IPM [100a], a request for the report comprising the set of counters and the set of KPIs;
- an identification unit [308], configured to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs;
- an evaluation unit [310], configured to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds;
- the identification unit [308], configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds;
- a report generation unit [312], configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds; and
- the transceiver unit [306], configured to send from the IPM [100a], the one or more reports to the user interface unit [316] of the UE based on the set of policies.
24. A non-transitory computer-readable storage medium storing instructions for counters and key performance indicator (KPIs) policy management in a network, the storage medium comprising executable code which, when executed by one or more units of a system [300], causes:
- a transceiver unit [302], configured to transmit, from a cron scheduler [304], a request for execution of one or more policies at a pre-defined interval to an integrated performance management (IPM) [100a];
- the transceiver unit [306], configured to receive, at the IPM [100a], a request for a report comprising a set of counters and a set of KPIs;
- an identification unit [308], configured to identify at the IPM [100a], a set of policies from the one or more policies comprising the set of counters and the set of KPIs;
- an evaluation unit [310], configured to evaluate at the IPM [100a], the set of policies comprising the set of counters and the set of KPIs based on a set of severity breach thresholds;
- the identification unit [308], configured to identify at the IPM [100a], a set of breach conditions associated with the set of counters and the set of KPIs based on the evaluation on the set of severity breach thresholds;
- a report generation unit [312], configured to generate at the IPM [100a], one or more reports comprising the set of breach conditions, wherein the breach conditions are calibrated based on the severity breach thresholds; and
- the transceiver unit [306], configured to send from the IPM [100a], the one or more reports to one or more users based on the set of policies.
PCT/IN2024/051966 2023-10-04 2024-10-04 Method and system for counters and key performance indicators (kpis) policy management in a network Pending WO2025074407A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321066604 2023-10-04

Publications (1)

Publication Number Publication Date
WO2025074407A1 true WO2025074407A1 (en) 2025-04-10

Family

ID=95284350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/051966 Pending WO2025074407A1 (en) 2023-10-04 2024-10-04 Method and system for counters and key performance indicators (kpis) policy management in a network

Country Status (1)

Country Link
WO (1) WO2025074407A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200236562A1 (en) * 2019-01-18 2020-07-23 Hcl Technologies Limited Node profiling based on a key performance indicator (kpi)
US20220116265A1 (en) * 2020-10-12 2022-04-14 Ribbon Communications Operating Company, Inc. Methods, apparatus and systems for efficient cross-layer network analytics


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24874231

Country of ref document: EP

Kind code of ref document: A1