WO2025017720A1 - System and method for managing a southbound instance in a network - Google Patents
System and method for managing a southbound instance in a network
- Publication number
- WO2025017720A1 · PCT/IN2024/051295
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- southbound
- instances
- requests
- parameters
- instance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
Definitions
- the present invention relates to the field of wireless communication systems, and more particularly to a method and system for managing a southbound instance in a network.
- FMS Facility Management System
- a method for managing the southbound instance in the network includes the step of transmitting, by one or more processors, one or more requests to one or more southbound instances.
- the method further includes the step of receiving, by the one or more processors, one or more responses from the one or more southbound instances based on the one or more requests transmitted.
- the method further includes the step of determining, by the one or more processors, one or more parameters pertaining to the one or more responses received from the one or more southbound instances.
- the method further includes the step of predicting, by the one or more processors, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
- the one or more requests are transmitted to the one or more southbound instances in a predefined order, the predefined order including at least a round-robin order.
- the one or more parameters related to each of the one or more southbound instances includes at least one of, a response time of each response received from the one or more southbound instances, and a request handling capacity of each of the one or more southbound instances.
- the step of predicting, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters includes the steps of: receiving, by the one or more processors, current data pertaining to the one or more parameters from each of the one or more southbound instances; comparing, by the one or more processors, utilizing the trained model, the current data pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances; in response to determining, by the one or more processors, that the current data is within the preset threshold range for the one or more southbound instances, predicting, by the one or more processors, that the one or more southbound instances are independent of one or more performance indicators; and in response to determining, by the one or more processors, that the current data is not within the preset threshold range for the one or more southbound instances, predicting, by the one or more processors, that the one or more southbound instances include one or more performance indicators.
- the model is trained with historical data pertaining to one or more parameters for each of the one or more southbound instances.
- the model is at least one of, an Artificial Intelligence/Machine Learning (AI/ML) model.
- AI/ML Artificial Intelligence/Machine Learning
- the one or more processors stop transmitting requests to the one or more southbound instances that include one or more performance indicators, in order to prevent failures and to maintain throughput.
- the one or more performance indicators include at least one of: one or more abnormalities, such as a delay in transmitting responses from the one or more southbound instances; and a request handling capacity issue pertaining to the inability of the one or more southbound instances to handle an increased number of requests.
- a system for managing the southbound instance in the network includes a transceiver configured to transmit one or more requests to one or more southbound instances and receive one or more responses from the one or more southbound instances based on the one or more requests transmitted.
- the system further includes a determination unit, configured to determine, one or more parameters pertaining to the one or more responses received from the one or more southbound instances.
- the system further includes a prediction unit, configured to, predict, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
- a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to perform the following operations.
- the processor is configured to transmit one or more requests to one or more southbound instances.
- the processor is further configured to receive one or more responses from the one or more southbound instances based on the one or more requests transmitted.
- the processor is further configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances.
- the processor is further configured to predict, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
- FIG. 1 is an exemplary block diagram of an environment for managing a southbound instance in a network, according to one or more embodiments of the present invention.
- FIG. 2 is an exemplary block diagram of a system for managing a southbound instance in a network, according to one or more embodiments of the present invention.
- FIG. 3 is an exemplary block diagram of architecture for managing a southbound instance, according to one or more embodiments of the present invention.
- FIG. 4 is an exemplary signal flow diagram illustrating the flow for managing a southbound instance in a network, according to one or more embodiments of the present disclosure.
- FIG. 5 is a flow diagram of a method for managing a southbound instance in a network, according to one or more embodiments of the present invention.
- the present disclosure describes managing a southbound instance in a network.
- the southbound instance refers to a southbound interface instance.
- These interfaces are represented as Southbound Interfaces (SBI).
- the invention utilizes a trained model for prediction of one or more performance indicators of one or more southbound interface instances.
- the inventive step of the system lies in the intelligent management of load pertaining to a plurality of requests across the multiple southbound interface instances based on one or more parameters and the one or more performance indicators which ensures that the system sends requests to a least occupied or most responsive southbound interface instance, thereby optimizing the throughput of the southbound interface instances.
- FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing a southbound instance in a network, according to one or more embodiments of the present invention.
- the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, a system 108, and one or more southbound instances 110.
- the UE 102 aids a user to interact with the system 108 for managing the southbound instance in the network 106.
- the user includes, at least one of, a network operator.
- UEs user equipment
- Each of the at least one UE 102 namely the first UE 102a, the second UE 102b, and the third UE 102c is configured to connect to the server 104 via the network 106.
- each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more such devices, for example a smartphone, Virtual Reality (VR) devices, Augmented Reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
- VR Virtual Reality
- AR Augmented Reality
- the network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
- PSTN Public-Switched Telephone Network
- the network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
- the network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
- the network 106 may also include, by way of example but not limitation, a Voice over Internet Protocol (VoIP) network.
- the environment 100 includes one or more southbound instances 110.
- the one or more southbound instances 110 are the southbound interface instances.
- the southbound interface instances enable a specific component to communicate with a lower level component.
- a system 108 communicating with one or more network nodes via the southbound interface instances.
- the one or more southbound instances 110 is used for establishing communication between the system 108 and one or more network nodes which are responsible for handling one or more requests from the system 108.
- the one or more southbound instances 110 acts as the medium between the system 108 and the one or more network nodes.
- the one or more southbound instances 110 receives the one or more requests from the system 108 and further transmits the one or more requests to the one or more network nodes to serve the one or more requests.
- the environment 100 includes the server 104 accessible via the network 106.
- the server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
- the server 104 may be associated with an entity; the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
- the environment 100 further includes the system 108 communicably coupled to the server 104, the one or more southbound instances 110, and the UE 102 via the network 106.
- the system 108 is adapted to be embedded within the server 104 or deployed as an individual entity.
- the system 108 is a Facility Management System (FMS) which interacts with the one or more southbound instances 110.
- FIG. 2 is an exemplary block diagram of the system 108 for managing a southbound instance in the network 106, according to one or more embodiments of the present invention.
- the system 108 manages the southbound instance in the network 106, the system 108 includes one or more processors 202, a memory 204, and a storage unit 206.
- the one or more processors 202 includes a transceiver 208, a determination unit 210, a prediction unit 212, and a trained model 214.
- the one or more processors 202 hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
- the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
- the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204, the memory 204 being communicably connected to the processor 202.
- the memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the one or more southbound instances 110 in the network 106.
- the memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
- the storage unit 206 is configured to store data pertaining to the one or more southbound instances 110.
- the storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not Only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth.
- NoSQL Not Only Structured Query Language
- the foregoing examples of storage unit 206 types are non-limiting and may not be mutually exclusive e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
- the transceiver 208 of the processor 202 is configured to transmit one or more requests to one or more southbound instances 110.
- the one or more requests are transmitted by the transceiver 208 to the one or more southbound instances 110 in a predefined order, such as, but not limited to, a round-robin manner.
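The round-robin dispatch described above can be sketched as follows; the instance names and the `round_robin_dispatch` helper are illustrative assumptions, not part of the disclosure:

```python
from itertools import cycle

def round_robin_dispatch(requests, instances):
    """Assign each outgoing request to the next southbound instance
    in a fixed circular order (round robin)."""
    order = cycle(instances)
    return [(request, next(order)) for request in requests]

# Four requests across three instances wrap back to the first instance.
assignments = round_robin_dispatch(
    ["r1", "r2", "r3", "r4"],
    ["SBI Instance 1", "SBI Instance 2", "SBI Instance 3"])
```

The fourth request lands back on the first instance, matching the cyclic order described in the disclosure.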
- the one or more requests are for initiating communication between the processor 202 and the one or more southbound instances 110.
- the transceiver 208 of the processor 202 is further configured to receive one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted.
- the determination unit 210 of the processor 202 is configured to determine, one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110.
- the one or more parameters are related to each of the one or more southbound instances 110.
- the one or more parameters includes at least one of, but not limited to, a response time of each response received from the one or more southbound instances 110, and a request handling capacity of each of the one or more southbound instances 110.
- the prediction unit 212 of the processor 202 is configured to predict, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters.
- the one or more performance indicators includes at least one of, but not limited to, one or more abnormalities such as delay in transmitting responses from the one or more southbound instances 110 and requests handling capacity pertaining to inability of the one or more southbound instances 110 in handling an increased number of requests.
- the prediction unit 212 predicts the one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters by comparing, utilizing the trained model 214, current data received from each of the one or more southbound instances 110 pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances 110. Further, based on the comparison, the prediction unit 212 determines whether the current data is within the preset threshold range. If the current data is not within the preset threshold range, the prediction unit 212 predicts that the one or more southbound instances 110 include one or more performance indicators. Based on this prediction, the transceiver 208 stops transmitting the one or more requests to the one or more southbound instances 110 that include one or more performance indicators, in order to prevent failures.
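The compare-and-halt behavior described in this paragraph might look like the following minimal sketch; the function names and data shapes are assumptions, since the disclosure does not prescribe an implementation:

```python
def flag_instances(current_data, threshold_ranges):
    """Predict which instances exhibit a performance indicator: any
    instance whose current parameter value falls outside its preset
    threshold range is flagged."""
    flagged = set()
    for instance, value in current_data.items():
        low, high = threshold_ranges[instance]
        if not (low <= value <= high):
            flagged.add(instance)
    return flagged

def healthy_targets(instances, flagged):
    """The transceiver stops transmitting to flagged instances and
    keeps only the remaining (healthy) ones as request targets."""
    return [i for i in instances if i not in flagged]

instances = ["SBI Instance 1", "SBI Instance 2", "SBI Instance 3"]
response_times = {"SBI Instance 1": 1, "SBI Instance 2": 1, "SBI Instance 3": 5}
ranges = {i: (1, 2) for i in instances}  # preset range of 1-2 seconds
flagged = flag_instances(response_times, ranges)
targets = healthy_targets(instances, flagged)
```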
- the trained model 214 is at least one of, but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model.
- the trained model 214 is trained with the historical data pertaining to one or more parameters for each of the one or more southbound instances 110.
- the trained model 214 learns trends, patterns, and behavior of the one or more southbound instances 110.
- the historical data is used to analyze past performance of the one or more southbound instances 110.
- the trained model 214 is configured to analyze the trends and patterns over time pertaining to the one or more parameters of the one or more southbound instances 110 such as variation in the response time or request handling capacity, which aids in understanding the long-term behavior of the one or more southbound instances 110.
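The disclosure does not specify how the preset threshold range is derived from the historical data; one plausible heuristic, shown here purely as an assumption, is to take low and high percentiles of the past observations of a parameter:

```python
def preset_range(history, low_pct=5, high_pct=95):
    """Derive a threshold range for one parameter from historical
    values using nearest-rank percentiles (an assumed heuristic,
    not taken from the patent)."""
    values = sorted(history)

    def nearest_rank(p):
        # Index of the value closest to the p-th percentile position.
        k = round(p / 100 * (len(values) - 1))
        return values[max(0, min(len(values) - 1, k))]

    return nearest_rank(low_pct), nearest_rank(high_pct)
```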
- the determination unit 210 includes one or more logic modules to analyze and determine the one or more parameters of the one or more southbound instances 110; utilizing the trained model 214, the prediction unit 212 then predicts the one or more performance indicators of the one or more southbound instances 110.
- the determination unit 210, trained model 214 and the prediction unit 212 are communicably coupled to each other and can be used in combination or interchangeably.
- the transceiver 208, the determination unit 210, the prediction unit 212, and the trained model 214 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202.
- the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202.
- the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource.
- the processor 202 may be implemented by electronic circuitry.
- FIG. 3 illustrates an exemplary block diagram of an architecture for managing a southbound instance, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing one or more southbound instances 110. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to multiple southbound instances for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
- FIG. 3 shows communication between a Northbound Interface (NBI) 310, the system 108, and the one or more southbound instances 110.
- NBI Northbound Interface
- the NBI 310 and the one or more southbound instances 110 use a network protocol connection to communicate with the system 108.
- the network protocol connection is the establishment and management of communication between the NBI 310, the system 108, and the one or more southbound instances 110, over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols.
- the network protocol connection includes, but not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
- SIP Session Initiation Protocol
- SIB System Information Block
- TCP Transmission Control Protocol
- UDP User Datagram Protocol
- FTP File Transfer Protocol
- HTTP Hypertext Transfer Protocol
- SNMP Simple Network Management Protocol
- ICMP Internet Control Message Protocol
- HTTPS Hypertext Transfer Protocol Secure
- TELNET Terminal Network
- a request is received at the processor 202 of the system 108 from the NBI 310.
- the NBI 310 is an interface that allows a component to communicate with a higher-level component.
- the one or more network nodes transmit the one or more requests to the system 108 via the NBI 310.
- the transceiver 208 transmits the one or more requests to one or more southbound instances based on the received one or more requests from the NBI 310.
- transceiver 208 transmits the one or more requests to the SBI Instance 1, the SBI Instance 2 and the SBI Instance 3.
- the transceiver 208 transmits the one or more requests to one or more network nodes via the at least one of, the SBI Instance 1, the SBI Instance 2 and the SBI Instance 3.
- the transceiver 208 receives one or more responses from the at least one of, the SBI Instance 1, the SBI Instance 2 and the SBI Instance 3.
- the one or more network nodes transmits the one or more responses via the at least one of, the SBI Instance 1, the SBI Instance 2 and the SBI Instance 3 subsequent to serving the one or more requests.
- the one or more parameters, such as the response time and the request handling capacity of each of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3, are determined by the determination unit 210 of the processor 202. Further, the prediction unit 212 utilizes the trained model 214 to predict one or more performance indicators of each of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 based on the response time and request handling capacity of each of these instances.
- the prediction unit 212 determines that the SBI Instance 3 among the one or more southbound instances 110 is underperforming or not meeting its expected capacity, as the response time of the SBI Instance 3 is higher compared to the SBI Instance 1, the SBI Instance 2, and a predefined response time. Based on the prediction that the SBI Instance 3 is not functioning properly, the transceiver 208 stops transmitting one or more requests to the SBI Instance 3 in order to prevent failures and to maintain throughput.
- the invention prevents complete order/request failures and enhances the reliability of the one or more southbound instances 110.
- FIG. 4 is an exemplary signal flow diagram illustrating the flow for managing a southbound instance in the network 106, according to one or more embodiments of the present disclosure.
- the NBI 310 transmits the request to the processor 202 of the system 108.
- the processor 202 of the system 108 receives the request from the NBI 310 and further transmits one or more requests to the one or more southbound instances 110 such as the SBI Instance 1 and the SBI instance 2.
- the processor 202 receives one or more responses from the one or more southbound instances 110 such as the SBI Instance 1 and the SBI instance 2.
- the processor 202 of the system 108 transmits the one or more requests to a southbound instance 110 such as the SBI Instance 1, subsequent to predicting, utilizing the trained model, the underperformance of the SBI Instance 2 based on determining the one or more parameters of each of the one or more southbound instances 110 such as the SBI Instance 1 and the SBI instance 2.
- the processor 202 prevents the complete order/request failures and enhances the reliability of the one or more southbound interfaces 110 by ensuring efficient utilization of responsive southbound instance such as the SBI Instance 1.
- FIG. 5 is a flow diagram of a method 500 for managing a southbound instance in the network 106, according to one or more embodiments of the present invention.
- the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
- the method 500 includes the step of transmitting one or more requests to one or more southbound instances 110.
- transceiver 208 of the processor 202 is configured to transmit the one or more requests to one or more southbound instances 110.
- the one or more requests are distributed or transmitted among the one or more southbound instances 110 in the predefined order, such as a round-robin manner.
- the one or more requests are distributed in a round-robin manner, for example from the SBI Instance 1 to the SBI Instance 2 to the SBI Instance 3, then again to the SBI Instance 1, the SBI Instance 2, and so on.
- the method 500 includes the step of receiving the one or more responses from the one or more southbound instances based on the one or more requests transmitted.
- the transceiver 208 of the processor 202 is configured to receive the one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted. For example, subsequent to the serving of the one or more requests by the one or more network nodes, the one or more southbound instances 110 transmit one or more responses to the processor 202.
- the method 500 includes the step of determining one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110.
- the determination unit 210 of the processor 202 is configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110.
- the determination unit 210 determines the response time of each of the one or more southbound instances 110. For example, the response time determined by the determination unit 210 for the SBI Instance 1 is 1 sec, the response time determined for the SBI Instance 2 is 1 sec, and the response time determined for the SBI Instance 3 is 5 sec.
- the determination unit 210 determines the request handling capacity for each of the one or more southbound instances 110. For example, the requests handled by the SBI Instance 1 is 1000 requests, requests handled by the SBI Instance 2 is 990 requests, and the requests handled by the SBI Instance 3 is 600 requests.
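The determination of response time and handled-request counts can be sketched as below; `send` is a hypothetical callable standing in for one request/response round trip over an SBI instance:

```python
import time

def probe_instance(send, requests):
    """Measure the average response time and the number of requests an
    instance successfully handles (two of the parameters named above)."""
    handled, elapsed = 0, 0.0
    for request in requests:
        start = time.monotonic()
        ok = send(request)               # one request/response round trip
        elapsed += time.monotonic() - start
        if ok:
            handled += 1
    average_response_time = elapsed / len(requests) if requests else 0.0
    return average_response_time, handled

# A stub instance that serves every request successfully.
avg, handled = probe_instance(lambda request: True, ["r1", "r2", "r3"])
```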
- the method 500 includes the step of predicting, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters.
- the prediction unit 212 of the processor 202 is configured to predict utilizing the trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters by receiving current data pertaining to one or more parameters from each of the one or more southbound instances.
- the prediction unit 212 receives the response time for SBI Instance 1, the SBI Instance 2, and the SBI Instance 3.
- the prediction unit 212 receives the data related to requests handled by SBI Instance 1, the SBI Instance 2, and the SBI Instance 3.
- the trained model 214 is trained with the historical data pertaining to one or more parameters for each of the one or more southbound instances 110. For example, while training, the trained model 214 learns the trends/patterns related to the one or more parameters for each of the southbound instances 110. Based on the training, the trained model 214 presets a threshold range for the one or more parameters of each of the southbound instances 110. For example, let us assume
- the preset threshold range related to the response time for the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 is 1 to 2 seconds.
- the preset threshold range related to the request handling capacity for the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 is 900 to 1000 requests.
- the prediction unit 212 compares, utilizing the trained model 214, the current data pertaining to the one or more parameters with the preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances 110. For example, the prediction unit 212 compares, utilizing the trained model 214, the response time of the SBI Instance 1, i.e., 1 sec, the response time of the SBI Instance 2, i.e., 1 sec, and the response time of the SBI Instance 3, i.e., 5 sec, with the preset threshold range related to the response time of the one or more southbound instances 110, i.e., 1 to 2 seconds.
- the prediction unit 212 compares, utilizing the trained model 214, the requests handled by the SBI Instance 1, i.e., 1000 requests, the requests handled by the SBI Instance 2, i.e., 990 requests, and the requests handled by the SBI Instance 3, i.e., 600 requests, with the preset threshold range related to the request handling capacity of the one or more southbound instances 110, i.e., 900 to 1000 requests.
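Running the example numbers above through the comparison yields the classification described next; the dictionary layout and the "responsive"/"underperforming" labels are illustrative assumptions:

```python
RESPONSE_RANGE = (1, 2)       # preset threshold: 1 to 2 seconds
CAPACITY_RANGE = (900, 1000)  # preset threshold: 900 to 1000 requests

observed = {
    "SBI Instance 1": {"response_time": 1, "handled": 1000},
    "SBI Instance 2": {"response_time": 1, "handled": 990},
    "SBI Instance 3": {"response_time": 5, "handled": 600},
}

def classify(params):
    """An instance is responsive only if both parameters fall
    within their preset threshold ranges."""
    in_range = (RESPONSE_RANGE[0] <= params["response_time"] <= RESPONSE_RANGE[1]
                and CAPACITY_RANGE[0] <= params["handled"] <= CAPACITY_RANGE[1])
    return "responsive" if in_range else "underperforming"

status = {name: classify(params) for name, params in observed.items()}
```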
- in response to determining that the current data is within the preset threshold range for the one or more southbound instances 110, the prediction unit 212 predicts that the one or more southbound instances are independent of one or more performance indicators.
- the one or more performance indicators include at least one of, but not limited to, one or more abnormalities such as delay in transmitting responses from the one or more southbound instances 110, and requests handling capacity pertaining to inability of the one or more southbound instances 110 in handling an increased number of requests.
- the prediction unit 212 predicts that the one or more southbound instances 110 are performing as expected and do not include any abnormalities. For example, the SBI Instance 1 and the SBI Instance 2 are performing well, as the response time and the request handling capacity of the SBI Instance 1 and the SBI Instance 2 are within the preset threshold range. Therefore, the SBI Instance 1 and the SBI Instance 2 are inferred as the responsive one or more southbound instances 110.
- in response to determining that the current data is not within the preset threshold range for the one or more southbound instances 110, the prediction unit 212 predicts that the one or more southbound instances 110 include one or more performance indicators.
- the prediction unit 212 predicts that the one or more southbound instances 110 are underperforming and include one or more abnormalities. For example, the SBI Instance 3 is underperforming, as the response time and the request handling capacity of the SBI Instance 3 are not within the preset threshold range. Therefore, the SBI Instance 3 is inferred as the underperforming southbound instance 110.
- the transceiver 208 stops transmitting the one or more requests to the one or more southbound instances 110 which include one or more performance indicators in order to prevent failures and to maintain throughput. For example, the transceiver 208 stops or halts transmitting the one or more requests to the SBI Instance 3.
- the processor 202 prevents complete order/requests failures and enhances the reliability of one or more southbound instances 110.
- the processor 202 distributes load pertaining to the one or more requests among the responsive one or more southbound instances 110 such as the SBI Instance 1 and the SBI Instance 2.
- the processor 202 distributes load pertaining to the one or more requests among the one or more southbound instances 110 which are least occupied.
- the processor 202 balances the workload among the responsive one or more southbound instances 110. Due to the distribution of the load pertaining to the one or more requests among the one or more responsive southbound instances 110, the throughput of the one or more southbound instances 110 is optimized.
- the present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions.
- the computer-readable instructions are executed by the processor 202.
- the processor 202 is configured to transmit one or more requests to one or more southbound instances 110.
- the processor 202 is further configured to receive one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted.
- the processor 202 is further configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110.
- the processor 202 is further configured to predict, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters.
- the present disclosure provides technical advancements such as smart load management by enabling intelligent distribution of the load pertaining to the one or more requests across the one or more southbound instances based on the one or more parameters, which leads to optimized performance and a balanced workload. Further, failure prevention is achieved by identifying and halting/stopping transmission of the one or more requests to the underperforming southbound instance, which enhances the reliability of the southbound interfaces. By reducing wait times for the one or more requests and ensuring efficient utilization of the responsive southbound instances, the invention increases the overall system throughput, improving the efficiency of the system.
- the present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features.
- the listed advantages are to be read in a non-limiting manner.
- UE: User Equipment
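The halt-and-redistribute behaviour described in the points above (stop sending requests to an instance flagged with performance indicators and spread the load over the responsive, least-occupied instances) can be sketched as follows. The function and instance names are illustrative only and are not part of the disclosure; a plain request counter stands in for whatever occupancy metric a real FMS would track:

```python
def route_request(load_by_instance, underperforming):
    """Pick the least-occupied responsive southbound instance for the next
    request, skipping instances flagged with performance indicators."""
    # Keep only instances that were NOT flagged as underperforming.
    candidates = {inst: load for inst, load in load_by_instance.items()
                  if inst not in underperforming}
    if not candidates:
        raise RuntimeError("no responsive southbound instance available")
    # Send the request to the least-occupied responsive instance.
    target = min(candidates, key=candidates.get)
    load_by_instance[target] += 1
    return target

# Example mirroring the text: SBI Instance 3 is flagged, so requests go to
# the least-occupied of SBI Instance 1 and SBI Instance 2.
loads = {"SBI-1": 5, "SBI-2": 3, "SBI-3": 0}
route_request(loads, underperforming={"SBI-3"})  # routes to "SBI-2"
```

Note that the flagged instance is excluded even when it reports the lowest load, since its low count here reflects halted traffic, not spare capacity.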
Abstract
The present invention relates to a system (108) and a method (500) for managing a southbound instance in a network (106). The method (500) includes the step of transmitting one or more requests to one or more southbound instances (110). The method (500) further includes the step of receiving one or more responses from the one or more southbound instances (110) based on the one or more requests transmitted. The method (500) further includes the step of determining one or more parameters pertaining to the one or more responses received from the one or more southbound instances (110). The method (500) further includes the step of predicting, utilizing a trained model (214), one or more performance indicators of the one or more southbound instances (110) based on the determined one or more parameters.
Description
SYSTEM AND METHOD FOR MANAGING A SOUTHBOUND INSTANCE IN A NETWORK
FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly to a method and system for managing a southbound instance in a network.
BACKGROUND OF THE INVENTION
[0002] In general, in various network architectures, there are often multiple network nodes that interact with one or more southbound interfaces. These southbound interfaces may correspond to different instances or components within a network node. To distribute the workload evenly, a request distribution mechanism, such as round-robin scheduling, is commonly employed. However, this approach does not take into account the response time and performance characteristics of the individual instances.
[0003] The problem arises when one or more instances of the network node experience delays or fail to respond in a timely manner. This can be due to performance issues or other activities occurring on the server hosting the instance. As a result, the entire system throughput is negatively impacted, as the requesting entity, such as a Facility Management System (FMS), has to wait for responses from non-responsive southbound instances.
[0004] Therefore, there is a need for a solution that solves the aforementioned problem.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and a system for managing a southbound instance in a network.
[0006] In one aspect of the present invention, a method for managing the southbound instance in the network is disclosed. The method includes the step of transmitting, by one or more processors, one or more requests to one or more southbound instances. The method further includes the step of receiving, by the one or more processors, one or more responses from the one or more southbound instances based on the one or more requests transmitted. The method further includes the step of determining, by the one or more processors, one or more parameters pertaining to the one or more responses received from the one or more southbound instances. The method further includes the step of predicting, by the one or more processors, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
[0007] In one embodiment, the one or more requests are transmitted to the one or more southbound instances in a predefined order, the predefined order including at least a round robin order.
[0008] In another embodiment, the one or more parameters related to each of the one or more southbound instances include at least one of a response time of each response received from the one or more southbound instances, and a request handling capacity of each of the one or more southbound instances.
[0009] In yet another embodiment, the step of predicting, utilizing the trained model, the one or more performance indicators of the one or more southbound instances based on the determined one or more parameters includes the steps of: receiving, by the one or more processors, current data pertaining to the one or more parameters from each of the one or more southbound instances; comparing, by the one or more processors, utilizing the trained model, the current data pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances; in response to determining, by the one or more processors, that the current data is within the preset threshold range for the one or more southbound instances, predicting, by the one or more processors, that the one or more southbound instances are independent of the one or more performance indicators; and in response to determining, by the one or more processors, that the current data is not within the preset threshold range for the one or more southbound instances, predicting, by the one or more processors, that the one or more southbound instances include the one or more performance indicators.
[0010] In yet another embodiment, the model is trained with historical data pertaining to one or more parameters for each of the one or more southbound instances.
[0011] In yet another embodiment, the model is an Artificial Intelligence/Machine Learning (AI/ML) model.
[0012] In yet another embodiment, the one or more processors stop transmitting requests to the one or more southbound instances which include the one or more performance indicators in order to prevent failures and to maintain throughput.
[0013] In yet another embodiment, the one or more performance indicators include at least one of one or more abnormalities, such as a delay in transmitting responses from the one or more southbound instances, and a requests handling capacity pertaining to an inability of the one or more southbound instances in handling an increased number of requests.
[0014] In another aspect of the present invention, a system for managing the southbound instance in the network is disclosed. The system includes a transceiver configured to transmit one or more requests to one or more southbound instances and receive one or more responses from the one or more southbound instances based on the one or more requests transmitted. The system further includes a determination unit configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances. The system further includes a prediction unit configured to predict, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
[0015] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, configure the processor to perform operations is disclosed. The processor is configured to transmit one or more requests to one or more southbound instances. The processor is further configured to receive one or more responses from the one or more southbound instances based on the one or more requests transmitted. The processor is further configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances. The processor is further configured to predict, utilizing a trained model, one or more performance indicators of the one or more southbound instances based on the determined one or more parameters.
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical
components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of an environment for managing a southbound instance in a network, according to one or more embodiments of the present invention;
[0019] FIG. 2 is an exemplary block diagram of a system for managing a southbound instance in a network, according to one or more embodiments of the present invention;
[0020] FIG. 3 is an exemplary block diagram of architecture for managing a southbound instance, according to one or more embodiments of the present invention;
[0021] FIG. 4 is an exemplary signal flow diagram illustrating the flow for managing a southbound instance in a network, according to one or more embodiments of the present disclosure; and
[0022] FIG. 5 is a flow diagram of a method for managing a southbound instance in a network, according to one or more embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] The present disclosure describes managing a southbound instance in a network. Herein the southbound instance refers to a southbound interface instance. These interfaces are represented by SBI. The invention utilizes a trained model for prediction of one or more performance indicators of one or more southbound interface instances. The inventive step of the system lies in the intelligent management of load pertaining to a plurality of requests across the multiple southbound interface instances based on one or more parameters and the one or more performance indicators which ensures that the system sends requests to a least occupied or most responsive southbound interface instance, thereby optimizing the throughput of the southbound interface instances.
[0028] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing a southbound instance in a network, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, a system 108, and one or more southbound instances 110. The UE 102 aids a user in interacting with the system 108 for managing the southbound instance in the network 106. In an embodiment, the user includes at least a network operator.
[0029] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or more specifically with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0030] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more of the above devices, such as a smartphone, Virtual Reality (VR) devices, Augmented Reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0031] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a Voice over IP (VoIP) network, or some combination thereof.
[0033] The environment 100 includes one or more southbound instances 110. The one or more southbound instances 110 are southbound interface instances. A southbound interface instance enables a specific component to communicate with a lower-level component, for example, the system 108 communicating with one or more network nodes via the southbound interface instances. The one or more southbound instances 110 are used for establishing communication between the system 108 and the one or more network nodes which are responsible for handling one or more requests from the system 108. The one or more southbound instances 110 act as the medium between the system 108 and the one or more network nodes. The one or more southbound instances 110 receive the one or more requests from the system 108 and further transmit the one or more requests to the one or more network nodes to serve the one or more requests.
[0034] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity
may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0035] The environment 100 further includes the system 108 communicably coupled to the server 104, the one or more southbound instances 110, and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or deployed as an individual entity. In one embodiment, the system 108 is a Facility Management System (FMS) which interacts with the one or more southbound instances 110.
[0036] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0037] FIG. 2 is an exemplary block diagram of the system 108 for managing a southbound instance in the network 106, according to one or more embodiments of the present invention.
[0038] As per the illustrated and preferred embodiment, the system 108 manages the southbound instance in the network 106. The system 108 includes one or more processors 202, a memory 204, and a storage unit 206. The one or more processors 202 include a transceiver 208, a determination unit 210, a prediction unit 212, and a trained model 214. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0039] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the one or more southbound instances 110 in the network 106. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0040] As per the illustrated embodiment, the storage unit 206 is configured to store data pertaining to the one or more southbound instances 110. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 206 types are non-limiting and may not be mutually exclusive, e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0041] In an embodiment, the transceiver 208 of the processor 202 is configured to transmit one or more requests to the one or more southbound instances 110. The one or more requests are transmitted by the transceiver 208 to the one or more southbound instances 110 in a predefined order, such as, but not limited to, a round-robin manner. In particular, the one or more requests are for initiating communication between the processor 202 and the one or more southbound instances 110. The transceiver 208 of the processor 202 is further configured to receive one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted.
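A minimal sketch of the round-robin transmission order described above is shown below. The instance names are placeholders and the actual transport toward the southbound instances is out of scope here:

```python
from itertools import cycle


class RoundRobinDispatcher:
    """Cycles requests across southbound instances in a fixed order
    (a sketch of the predefined order, not the patented transceiver 208)."""

    def __init__(self, instances):
        self._ring = cycle(instances)

    def next_instance(self):
        # Each call yields the next instance in cyclic (round-robin) order.
        return next(self._ring)


dispatcher = RoundRobinDispatcher(["SBI-1", "SBI-2", "SBI-3"])
order = [dispatcher.next_instance() for _ in range(4)]
# order == ["SBI-1", "SBI-2", "SBI-3", "SBI-1"]
```

As the background section notes, this order ignores per-instance performance, which is precisely what the prediction step later compensates for.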
[0042] In an embodiment, upon reception of the one or more responses from the one or more southbound instances 110, the determination unit 210 of the processor 202 is configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110. In particular, the one or more parameters are related to each of the one or more southbound instances 110. The one or more parameters include at least one of, but not limited to, a response time of each response received from the one or more southbound instances 110, and a request handling capacity of each of the one or more southbound instances 110.
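The two parameters named above can be recorded per instance with a small tracker. This is an illustrative sketch, not the patented determination unit 210 itself; the timestamp handling is deliberately simplified (raw start/finish seconds rather than per-request IDs):

```python
from collections import defaultdict


class ParameterTracker:
    """Records, per southbound instance, the response time of each response
    and the number of requests the instance has handled."""

    def __init__(self):
        self.response_times = defaultdict(list)   # instance -> [seconds]
        self.requests_handled = defaultdict(int)  # instance -> request count

    def record(self, instance, started_at, finished_at):
        # One completed request: store its response time and bump the count.
        self.response_times[instance].append(finished_at - started_at)
        self.requests_handled[instance] += 1

    def avg_response_time(self, instance):
        samples = self.response_times[instance]
        return sum(samples) / len(samples) if samples else None
```

The averaged response time and the handled-request count are then the "current data" fed to the prediction step.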
[0043] Subsequent to determining the one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110, the prediction unit 212 of the processor 202 is configured to predict, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters. The one or more performance indicators include at least one of, but not limited to, one or more abnormalities such as a delay in transmitting responses from the one or more southbound instances 110, and a requests handling capacity pertaining to an inability of the one or more southbound instances 110 in handling an increased number of requests.
[0044] In one embodiment, the prediction unit 212 predicts the one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters by comparing, utilizing the trained model 214, current data received from each of the one or more southbound instances 110 pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances 110. Further, based on the comparison, the prediction unit 212 determines whether the current data is within the preset threshold range. If the current data is not within the preset threshold range, the prediction unit 212 predicts that the one or more southbound instances 110 include one or more performance indicators. Based on the prediction of the one or more performance indicators included in the one or more southbound instances 110, the transceiver 208 stops transmitting the one or more requests to the one or more southbound instances 110 which include one or more performance indicators in order to prevent failures.
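The comparison-and-prediction step of this paragraph can be sketched as a pure function. The threshold values below reuse the example figures from this disclosure (response time of 1 to 2 seconds, handling capacity of 900 to 1000 requests), while the parameter key names are hypothetical:

```python
def classify_instances(current, thresholds):
    """Compare each instance's current parameter values against the preset
    threshold ranges; any out-of-range parameter marks the instance as
    carrying a performance indicator (i.e., underperforming)."""
    responsive, underperforming = [], []
    for instance, params in current.items():
        in_range = all(thresholds[param][0] <= value <= thresholds[param][1]
                       for param, value in params.items())
        (responsive if in_range else underperforming).append(instance)
    return responsive, underperforming


# Example figures from the disclosure: SBI Instance 3 responds in 5 seconds
# and handled only 600 requests, so it falls outside both preset ranges.
thresholds = {"response_time": (1, 2), "requests_handled": (900, 1000)}
current = {
    "SBI-1": {"response_time": 1, "requests_handled": 1000},
    "SBI-2": {"response_time": 1, "requests_handled": 990},
    "SBI-3": {"response_time": 5, "requests_handled": 600},
}
responsive, underperforming = classify_instances(current, thresholds)
# responsive == ["SBI-1", "SBI-2"]; underperforming == ["SBI-3"]
```

The transceiver would then simply drop the `underperforming` list from its dispatch rotation until those instances recover.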
[0045] In an embodiment, the trained model 214 is at least one of, but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model.
[0046] The trained model 214 is trained with the historical data pertaining to one or more parameters for each of the one or more southbound instances 110. In particular, the trained model 214 learns trends, patterns, and behavior of the one or more southbound instances 110. The historical data is used to analyze past performance of the one or more southbound instances 110. The trained model 214 is configured to analyze the trends and patterns over time pertaining to the one or more parameters of the one or more southbound instances 110 such as variation in the response time or request handling capacity, which aids in understanding the long-term behavior of the one or more southbound instances 110.
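The disclosure leaves the internals of the trained model 214 open. As one hedged illustration only, a preset threshold range could be derived from an instance's historical samples with a simple mean-and-deviation rule; an actual AI/ML model learning trends over time would replace this heuristic:

```python
import statistics


def preset_threshold_range(history, k=2.0):
    """Derive a per-parameter threshold range from historical samples as
    mean +/- k population standard deviations. This is an illustrative
    stand-in for the trained model, not the disclosed implementation."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return (mu - k * sigma, mu + k * sigma)


# Hypothetical historical response times (seconds) for one SBI instance.
low, high = preset_threshold_range([1.0, 1.2, 1.1, 1.3, 1.4])
# Current readings outside (low, high) would flag a performance indicator.
```

Widening `k` makes the predictor more tolerant of variation; narrowing it flags abnormalities earlier at the cost of false positives.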
[0047] In one embodiment, the determination unit 210 includes one or more logic modules to analyze/determine the one or more parameters of the one or more southbound instances 110, and, utilizing the trained model 214, the prediction unit 212 predicts the one or more performance indicators of the one or more southbound instances 110. In another embodiment, the determination unit 210, the trained model 214, and the prediction unit 212 are communicably coupled to each other and can be used in combination or interchangeably.
[0048] The transceiver 208, the determination unit 210, the prediction unit 212, and the trained model 214 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage
medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0049] FIG. 3 illustrates an exemplary block diagram of an architecture for managing a southbound instance, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing one or more southbound instances 110. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to multiple southbound instances for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0050] FIG. 3 shows communication between a Northbound Interface (NBI) 310, the system 108, and the one or more southbound instances 110. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the NBI 310 and the one or more southbound instances 110 use a network protocol connection to communicate with the system 108. In an embodiment, the network protocol connection refers to the establishment and management of communication between the NBI 310, the system 108, and the one or more southbound instances 110 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS), and Terminal Network (TELNET).
[0051] For example, let us assume, the one or more southbound instances 110 includes a Southbound Interface (SBI) Instance 1, a SBI Instance 2 and a SBI Instance 3. In alternate embodiments, the one or more southbound instances 110 may include SBI Instances as per the requirement of the network 106.
[0052] Initially, a request is received at the processor 202 of the system 108 from the NBI 310. The NBI 310 is an interface that allows a component to communicate with a higher-level component. For example, the one or more network nodes transmit the one or more requests to the system 108 via the NBI 310.
[0053] Further, the transceiver 208 transmits the one or more requests to the one or more southbound instances 110 based on the one or more requests received from the NBI 310. In particular, the transceiver 208 transmits the one or more requests to the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3. In an alternate embodiment, the transceiver 208 transmits the one or more requests to one or more network nodes via at least one of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3.
[0054] Thereafter, the transceiver 208 receives one or more responses from at least one of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3. In particular, the one or more network nodes transmit the one or more responses via at least one of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 subsequent to serving the one or more requests.
[0055] Upon receiving the one or more responses from at least one of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3, the determination unit 210 of the processor 202 determines the one or more parameters, such as the response time/request handling capacity, of each of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3. Further, the prediction unit 212 utilizes the trained model 214 to predict one or more performance indicators of each of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 based on the response time/request handling capacity of each of the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3.
[0056] Furthermore, the prediction unit 212 determines that the SBI Instance 3 among the one or more southbound instances 110 is underperforming or not meeting its expected capacity, as the response time of the SBI Instance 3 is higher than that of the SBI Instance 1 and the SBI Instance 2 and exceeds a predefined response time. Based on the prediction that the SBI Instance 3 is not functioning properly, the transceiver 208 stops transmitting the one or more requests to the SBI Instance 3 in order to prevent failures and to maintain throughput. Advantageously, by identifying the underperforming SBI instance, the invention prevents complete order/request failures and enhances the reliability of the one or more southbound instances 110.
[0057] FIG. 4 is an exemplary signal flow diagram illustrating the flow for managing a southbound instance in the network 106, according to one or more embodiments of the present disclosure.
[0058] At step 402, the NBI 310 transmits the request to the processor 202 of the system 108.
[0059] At step 404, the processor 202 of the system 108 receives the request from the NBI 310 and further transmits one or more requests to the one or more southbound instances 110 such as the SBI Instance 1 and the SBI instance 2.
[0060] At step 406, based on the one or more requests transmitted, the processor 202 receives one or more responses from the one or more southbound instances 110 such as the SBI Instance 1 and the SBI instance 2.
[0061] At step 408, the processor 202 of the system 108 transmits the one or more requests to a southbound instance 110 such as the SBI Instance 1, subsequent to predicting, utilizing the trained model, the underperformance of the SBI Instance 2 based on determining the one or more parameters of each of the one or more southbound instances 110 such as the SBI Instance 1 and the SBI Instance 2. Advantageously, the processor 202 prevents complete order/request failures and enhances the reliability of the one or more southbound instances 110 by ensuring efficient utilization of a responsive southbound instance such as the SBI Instance 1.
[0062] FIG. 5 is a flow diagram of a method 500 for managing a southbound instance in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should not be construed as limiting the scope of the present disclosure.
[0063] At step 502, the method 500 includes the step of transmitting one or more requests to one or more southbound instances 110. In one embodiment, the transceiver 208 of the processor 202 is configured to transmit the one or more requests to the one or more southbound instances 110. In particular, the one or more requests are distributed or transmitted among the one or more southbound instances 110 in a predefined order, such as in a round-robin manner. For example, the one or more requests are distributed in a round-robin manner: to the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3, then again to the SBI Instance 1, the SBI Instance 2, and so on.
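The round-robin distribution of step 502 can be sketched as follows. This is an illustrative Python sketch only; the function name and the request/instance labels are hypothetical and not part of the disclosure.

```python
from itertools import cycle

def round_robin_dispatch(requests, instances):
    """Assign requests to instances in a fixed cyclic (round-robin) order."""
    rotation = cycle(instances)
    return [(request, next(rotation)) for request in requests]

# Five requests over three SBI instances: the fourth request wraps
# back to the first instance, as described in the example above.
assignments = round_robin_dispatch(
    ["req-1", "req-2", "req-3", "req-4", "req-5"],
    ["SBI-1", "SBI-2", "SBI-3"],
)
# assignments[3] is ("req-4", "SBI-1"): the rotation restarts after SBI-3.
```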
[0064] At step 504, the method 500 includes the step of receiving the one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted. In one embodiment, the transceiver 208 of the processor 202 is configured to receive the one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted. For example, subsequent to serving the one or more requests by the one or more network nodes, the one or more southbound instances 110 transmit one or more responses to the processor 202.
[0065] At step 506, the method 500 includes the step of determining one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110. In one embodiment, the determination unit 210 of the processor 202 is configured to determine the one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110. In particular, the determination unit 210 determines the response time of each of the one or more southbound instances 110. For example, the response time determined by the determination unit 210 for the SBI Instance 1 is 1 sec, the response time determined for the SBI Instance 2 is 1 sec, and the response time determined for the SBI Instance 3 is 5 sec.
[0066] In an alternate embodiment, the determination unit 210 determines the request handling capacity of each of the one or more southbound instances 110. For example, the requests handled by the SBI Instance 1 are 1000 requests, the requests handled by the SBI Instance 2 are 990 requests, and the requests handled by the SBI Instance 3 are 600 requests.
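The parameter determination of step 506 can be illustrated with a minimal sketch that aggregates a log of observed responses into a per-instance average response time and request count. The function and field names are assumptions introduced for illustration; the disclosure does not prescribe a data layout.

```python
def determine_parameters(response_log):
    """Aggregate (instance, response_time_sec) observations into per-instance parameters."""
    totals = {}
    for instance, response_time in response_log:
        entry = totals.setdefault(instance, {"requests_handled": 0, "total_time": 0.0})
        entry["requests_handled"] += 1
        entry["total_time"] += response_time
    # Report the two parameters named in the text: request count and response time.
    return {
        instance: {
            "requests_handled": entry["requests_handled"],
            "avg_response_time": entry["total_time"] / entry["requests_handled"],
        }
        for instance, entry in totals.items()
    }

# Mirrors the example above: SBI Instance 3 responds in 5 sec.
params = determine_parameters([("SBI-1", 1.0), ("SBI-2", 1.0), ("SBI-3", 5.0)])
```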
[0067] At step 508, the method 500 includes the step of predicting, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters. In one embodiment, the prediction unit 212 of the processor 202 is configured to predict, utilizing the trained model 214, the one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters by receiving current data pertaining to the one or more parameters from each of the one or more southbound instances 110. In particular, the prediction unit 212 receives the response time for the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3. In an alternate embodiment, the prediction unit 212 receives the data related to the requests handled by the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3.
[0068] In particular, the trained model 214 is trained with the historical data pertaining to the one or more parameters for each of the one or more southbound instances 110. For example, during training, the trained model 214 learns the trends/patterns related to the one or more parameters for each of the southbound instances 110. Based on the training, the trained model 214 presets a threshold range for the one or more parameters of each of the southbound instances 110. For example, let us assume
the preset threshold range related to the response time for the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 is 1 to 2 seconds. In an alternate embodiment, the preset threshold range related to the request handling capacity for the SBI Instance 1, the SBI Instance 2, and the SBI Instance 3 is 900 to 1000 requests.
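The disclosure leaves the internals of the trained model 214 open. As one hedged stand-in for how a threshold range could be preset from historical data, a simple statistical baseline (mean plus or minus a few standard deviations) is sketched below; the function name, the `k` parameter, and the sample values are assumptions for illustration, not the claimed AI/ML model.

```python
import statistics

def preset_threshold_range(historical_values, k=2.0):
    """Derive a (low, high) threshold range as mean +/- k standard deviations."""
    mean = statistics.mean(historical_values)
    spread = k * statistics.stdev(historical_values)
    return (mean - spread, mean + spread)

# Historical response times (seconds) observed for one SBI instance.
low, high = preset_threshold_range([1.0, 1.2, 1.1, 0.9, 1.3, 1.0])
# A current response time of 5 sec falls well above `high`.
```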
[0069] Further, the prediction unit 212 compares, utilizing the trained model 214, the current data pertaining to the one or more parameters with the preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances 110. For example, the prediction unit 212 compares, utilizing the trained model 214, the response time of the SBI Instance 1, i.e., 1 sec, the response time of the SBI Instance 2, i.e., 1 sec, and the response time of the SBI Instance 3, i.e., 5 sec, with the preset threshold range related to the response time of the one or more southbound instances 110, i.e., 1 to 2 seconds. In an alternate embodiment, for example, the prediction unit 212 compares, utilizing the trained model 214, the requests handled by the SBI Instance 1, i.e., 1000 requests, the requests handled by the SBI Instance 2, i.e., 990 requests, and the requests handled by the SBI Instance 3, i.e., 600 requests, with the preset threshold range related to the request handling capacity of the one or more southbound instances 110, i.e., 900 to 1000 requests.
[0070] Thereafter, in response to determining, by the prediction unit 212, that the current data is within the preset threshold range for the one or more southbound instances 110, the prediction unit 212 predicts the one or more southbound instances 110 as being independent of the one or more performance indicators. The one or more performance indicators include at least one of, but are not limited to, one or more abnormalities, such as a delay in transmitting responses from the one or more southbound instances 110, and a request handling capacity pertaining to an inability of the one or more southbound instances 110 to handle an increased number of requests.
[0071] In particular, based on the comparison, when the response time and the request handling capacity of the one or more southbound instances 110 are within the preset threshold range, the prediction unit 212 predicts that the one or more southbound instances 110 are performing as expected and do not include any abnormalities. For example, the SBI Instance 1 and the SBI Instance 2 are performing well, as the response time and the request handling capacity of the SBI Instance 1 and the SBI Instance 2 are within the preset threshold range. Therefore, the SBI Instance 1 and the SBI Instance 2 are inferred to be the responsive one or more southbound instances 110.
[0072] In an alternate embodiment, in response to determining, by the prediction unit 212, that the current data is not within the preset threshold range for the one or more southbound instances 110, the prediction unit 212 predicts that the one or more southbound instances 110 include one or more performance indicators.
[0073] In particular, based on the comparison, when the response time and the request handling capacity of the one or more southbound instances 110 are not within the preset threshold range, the prediction unit 212 predicts that the one or more southbound instances 110 are underperforming and include one or more abnormalities. For example, the SBI Instance 3 is underperforming, as the response time and the request handling capacity of the SBI Instance 3 are not within the preset threshold range. Therefore, the SBI Instance 3 is inferred to be the underperforming southbound instance 110.
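The comparison and prediction described in paragraphs [0069] to [0073] reduce to classifying each instance by whether its current parameter value lies inside the preset threshold range. The sketch below uses the response-time figures from the running example; the function name and labels are illustrative assumptions.

```python
def classify_instances(current_values, threshold_range):
    """Split instances into responsive vs. underperforming by the preset range."""
    low, high = threshold_range
    responsive, underperforming = [], []
    for instance, value in current_values.items():
        if low <= value <= high:
            responsive.append(instance)    # within range: no performance indicator
        else:
            underperforming.append(instance)  # outside range: flagged
    return responsive, underperforming

responsive, underperforming = classify_instances(
    {"SBI-1": 1.0, "SBI-2": 1.0, "SBI-3": 5.0},  # current response times (sec)
    (1.0, 2.0),                                   # preset threshold range
)
# responsive == ["SBI-1", "SBI-2"]; underperforming == ["SBI-3"]
```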
[0074] Furthermore, the transceiver 208 stops transmitting the one or more requests to the one or more southbound instances 110 that include the one or more performance indicators, in order to prevent failures and to maintain throughput. For example, the transceiver 208 stops or halts transmitting the one or more requests to the SBI Instance 3. Advantageously, by identifying and halting requests to underperforming southbound instances 110, the processor 202 prevents complete order/request failures and enhances the reliability of the one or more southbound instances 110.
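Halting traffic to a flagged instance can be sketched as a dispatcher that removes underperforming instances from the rotation before distributing requests. This is an illustrative sketch; the error-handling policy when no responsive instance remains is an assumption.

```python
from itertools import cycle

def dispatch_excluding(requests, instances, underperforming):
    """Round-robin dispatch that halts traffic to flagged (underperforming) instances."""
    flagged = set(underperforming)
    healthy = [inst for inst in instances if inst not in flagged]
    if not healthy:
        # Assumed policy: fail loudly when every instance is flagged.
        raise RuntimeError("no responsive southbound instance available")
    rotation = cycle(healthy)
    return [(request, next(rotation)) for request in requests]

assignments = dispatch_excluding(
    ["req-1", "req-2", "req-3"],
    ["SBI-1", "SBI-2", "SBI-3"],
    underperforming=["SBI-3"],
)
# SBI-3 receives nothing; req-3 wraps back to SBI-1.
```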
[0075] Thereafter, the processor 202 distributes the load pertaining to the one or more requests among the responsive one or more southbound instances 110, such as the SBI Instance 1 and the SBI Instance 2. In an alternate embodiment, the processor 202 distributes the load pertaining to the one or more requests among the one or more southbound instances 110 which are least occupied. Advantageously, the processor 202 balances the workload among the responsive one or more southbound instances 110. Due to the distribution of the load pertaining to the one or more requests among the one or more responsive southbound instances 110, the throughput of the one or more southbound instances 110 is optimized.
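The least-occupied alternative can be sketched with a min-heap keyed on each instance's outstanding load, so every request goes to the currently least-loaded responsive instance. The instance names and initial loads below are illustrative assumptions.

```python
import heapq

def least_occupied_dispatch(requests, current_load):
    """Assign each request to the instance with the fewest outstanding requests."""
    heap = [(load, instance) for instance, load in current_load.items()]
    heapq.heapify(heap)
    assignments = []
    for request in requests:
        load, instance = heapq.heappop(heap)     # least-occupied instance
        assignments.append((request, instance))
        heapq.heappush(heap, (load + 1, instance))  # one more request in flight
    return assignments

assignments = least_occupied_dispatch(
    ["req-1", "req-2", "req-3"],
    {"SBI-1": 5, "SBI-2": 0, "SBI-3": 1},
)
# SBI-2 absorbs requests until its load catches up with SBI-3's.
```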
[0076] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to transmit one or more requests to one or more southbound instances 110. The processor 202 is further configured to receive one or more responses from the one or more southbound instances 110 based on the one or more requests transmitted. The processor 202 is further configured to determine one or more parameters pertaining to the one or more responses received from the one or more southbound instances 110. The processor 202 is further configured to predict, utilizing a trained model 214, one or more performance indicators of the one or more southbound instances 110 based on the determined one or more parameters.
[0077] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0078] The present disclosure provides technical advancements such as smart load management, by enabling intelligent distribution of the load pertaining to the one or more requests across the one or more southbound instances based on the one or more parameters, which leads to optimized performance and a balanced workload. Further, failure prevention is achieved by identifying the underperforming southbound instance and halting/stopping transmission of the one or more requests to it, which enhances the reliability of the southbound instances. By reducing wait times for the one or more requests and ensuring efficient utilization of the responsive southbound instances, the invention increases the overall system throughput, improving the efficiency of the system.
[0079] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0080] Environment - 100;
[0081] User Equipment (UE) - 102;
[0082] Server - 104;
[0083] Network- 106;
[0084] System -108;
[0085] One or more southbound instances - 110;
[0086] Processor - 202;
[0087] Memory - 204;
[0088] Storage unit - 206;
[0089] Transceiver - 208;
[0090] Determination unit - 210;
[0091] Prediction unit - 212;
[0092] Trained Model - 214;
[0093] NBI - 310.
Claims
1. A method (500) of managing a southbound instance in a network (106), the method (500) comprising the steps of: transmitting, by one or more processors (202), one or more requests to one or more southbound instances (110); receiving, by the one or more processors (202), one or more responses from the one or more southbound instances (110) based on the one or more requests transmitted; determining, by the one or more processors (202), one or more parameters pertaining to the one or more responses received from the one or more southbound instances (110); and predicting, by the one or more processors (202), utilizing a trained model (214), one or more performance indicators of the one or more southbound instances (110) based on the determined one or more parameters.
2. The method (500) as claimed in claim 1, wherein the one or more requests are transmitted to the one or more southbound instances (110) in a predefined order, wherein the predefined order includes at least one of, a round robin.
3. The method (500) as claimed in claim 1, wherein the one or more parameters related to each of the one or more southbound instances (110) includes at least one of, a response time of each response received from the one or more southbound instances (110), and a request handling capacity of each of the one or more southbound instances (110).
4. The method (500) as claimed in claim 1, wherein the step of predicting, utilizing a trained model (214), one or more performance indicators of the one or more southbound instances (110) based on the determined one or more parameters, includes the steps of:
receiving, by the one or more processors (202), current data pertaining to one or more parameters from each of the one or more southbound instances (110); comparing, by the one or more processors (202), utilizing the trained model (214), the current data pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances (110); in response to determining, by the one or more processors (202), the current data is within the preset threshold range for the one or more southbound instances (110), predicting, by the one or more processors (202), the one or more southbound instances (110) being independent of one or more performance indicators; and in response to determining, by the one or more processors (202), the current data is not within the preset threshold range for the one or more southbound instances (110), predicting, by the one or more processors (202), the one or more southbound instances (110) includes one or more performance indicators.
5. The method (500) as claimed in claim 1, wherein the model (214) is trained with historical data pertaining to one or more parameters for each of the one or more southbound instances (110).
6. The method (500) as claimed in claim 1, wherein the model (214) is at least one of, an Artificial Intelligence/Machine Learning (AI/ML) model.
7. The method (500) as claimed in claim 1, wherein the one or more processors (202) stops transmitting requests to the one or more southbound instances (110) which includes one or more performance indicators in order to prevent failures and to maintain throughput.
8. The method (500) as claimed in claim 1, wherein the one or more performance indicators include at least one of, one or more abnormalities such as delay in transmitting responses from the one or more southbound instances (110), and
requests handling capacity pertaining to inability of the one or more southbound instances (110) in handling an increased number of requests.
9. A system (108) of managing a southbound instance in a network (106), the system (108) comprising: a transceiver (208), configured to: transmit, one or more requests to one or more southbound instances (110); and receive, one or more responses from the one or more southbound instances (110) based on the one or more requests transmitted; a determination unit (210), configured to, determine, one or more parameters pertaining to the one or more responses received from the one or more southbound instances (110); and a prediction unit (212), configured to, predict, utilizing a trained model (214), one or more performance indicators of the one or more southbound instances (110) based on the determined one or more parameters.
10. The system (108) as claimed in claim 9, wherein the one or more requests are transmitted to the one or more southbound instances (110) in a predefined order, wherein the predefined order includes at least one of, a round robin.
11. The system (108) as claimed in claim 9, wherein the one or more parameters related to each of the one or more southbound instances (110) includes at least one of, a response time of each response received from the one or more southbound instances (110), and a request handling capacity of each of the one or more southbound instances (110).
12. The system (108) as claimed in claim 9, wherein the prediction unit (212) predicts, utilizing a trained model (214), one or more performance indicators of
the one or more southbound instances (110) based on the determined one or more parameters, by: receiving, current data pertaining to one or more parameters from each of the one or more southbound instances (110); comparing, utilizing the trained model, the current data pertaining to the one or more parameters with a preset threshold range pertaining to the one or more parameters of each of the one or more southbound instances (110); in response to determining, the current data is within the preset threshold range for the one or more southbound instances (110), predicting, the one or more southbound instances (110) being independent of one or more performance indicators; and in response to determining, the current data is not within the preset threshold range for the one or more southbound instances (110), predicting, the one or more southbound instances (110) includes one or more performance indicators.
13. The system (108) as claimed in claim 9, wherein the model is trained with historical data pertaining to one or more parameters for each of the one or more southbound instances (110).
14. The system (108) as claimed in claim 9, wherein the model (214) is at least one of, an Artificial Intelligence/Machine Learning (AI/ML) model.
15. The system (108) as claimed in claim 9, wherein the transceiver (208) stops transmitting requests to the one or more southbound instances (110) which includes one or more performance indicators in order to prevent failures and to maintain throughput.
16. The system (108) as claimed in claim 9, wherein the one or more performance indicators include at least one of, one or more abnormalities such as delay in transmitting responses from the one or more southbound instances (110), and
requests handling capacity pertaining to inability of the one or more southbound instances (110) in handling an increased number of requests.
17. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (202), cause the processor (202) to: transmit, one or more requests to one or more southbound instances (110); receive, one or more responses from the one or more southbound instances (110) based on the one or more requests transmitted; determine, one or more parameters pertaining to the one or more responses received from the one or more southbound instances (110); and predict, utilizing a trained model (214), one or more performance indicators of the one or more southbound instances (110) based on the determined one or more parameters.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN202321048731 | 2023-07-19 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2025017720A1 true WO2025017720A1 (en) | 2025-01-23 |
Family
ID=94281266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IN2024/051295 Pending WO2025017720A1 (en) | 2023-07-19 | 2024-07-18 | System and method for managing a southbound instance in a network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022095523A1 (en) * | 2020-11-03 | 2022-05-12 | 华为技术有限公司 | Method, apparatus and system for managing machine learning model |
CN115701303A (en) * | 2020-04-17 | 2023-02-07 | 瑞典爱立信有限公司 | Method and system for offline modeling for quality of service prediction of connected vehicles |