WO2025057244A1 - System and method of managing one or more application programming interface (API) requests in network
- Publication number
- WO2025057244A1 (PCT/IN2024/051761)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- api
- provider
- providers
- requests
- processors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44505—Configuring for program initiating, e.g. using registry, configuration files
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/547—Remote procedure calls [RPC]; Web services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/501—Performance criteria
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/503—Resource availability
Definitions
- One or more embodiments of the present disclosure provide a system and a method of managing one or more Application Programming Interface (API) requests in a network.
- the method of managing the one or more API requests in a network includes retrieving, by one or more processors, information pertaining to one or more API calls and the performance of one or more API providers from an API call log corresponding to each of the one or more API providers. Further, the method includes determining, by the one or more processors, an API provider condition of each of the one or more APIs by analysing the retrieved information. Further, the method includes updating, by the one or more processors, the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. Further, the method includes receiving, by the one or more processors, the one or more API requests via at least one User Equipment (UE).
- the method includes parsing, by the one or more processors, utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. Further, the method includes selecting, by the one or more processors, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. Further, the method includes transmitting, by the one or more processors, the one or more API requests to the selected API provider.
- the information includes response time, latency, availability, and reliability of each of the one or more API providers.
- the API provider condition corresponds to at least one of a performance efficiency, a response turnaround time, and a present user load of each of the one or more API providers.
- the method on selection of the API provider, includes updating, by the one or more processors, details pertaining to the selected API provider in a log file.
- the log file is utilized for training the AI/ML model.
- the method includes comparing, by the one or more processors, the API provider condition corresponding to each of the API providers. Further, the method includes selecting, by the one or more processors, the API provider with a required performance efficiency, a required response turnaround time, and a required user load based on the comparison.
- the system for managing one or more API requests in a network includes a retrieving unit, a determining unit, an updating unit, a receiving unit, a parsing unit, a selecting unit, and a transmitting unit.
- the retrieving unit is configured to retrieve information pertaining to one or more API calls and performance of one or more API providers from an API call log corresponding to each of the one or more API providers.
- the determining unit is configured to determine an API provider condition of each of the one or more APIs by analysing the retrieved information.
- the updating unit is configured to update the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers.
- the receiving unit is configured to receive the one or more API requests via at least one UE.
- the parsing unit is configured to parse utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests.
- the selecting unit is configured to select, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file.
- the transmitting unit transmits the one or more API requests to the selected API provider.
- a non-transitory computer-readable medium having stored thereon computer-readable instructions causes the processor to retrieve information pertaining to one or more API calls and the performance of one or more API providers from an API call log corresponding to each of the one or more API providers. Further, the processor determines an API provider condition of each of the one or more API providers by analysing the collected information. Further, the processor updates the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. Further, the processor receives the one or more API requests via at least one User Equipment (UE). Further, the processor parses the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. Further, the processor selects an API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. Further, the processor transmits the one or more API requests to the selected API provider.
- FIG. 1 is an exemplary block diagram of an environment for managing one or more API requests in a network, according to various embodiments of the present disclosure.
- FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
- FIG. 3 is an example schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure.
- FIG. 4 illustrates a system architecture for managing the one or more API requests in the network, in accordance with some embodiments.
- FIG. 5 is a flow diagram illustrating the method for managing one or more API requests in the network, according to various embodiments of the present disclosure.
- FIG. 6 is an example flow diagram illustrating an internal call flow for managing the one or more API requests in the network, in accordance with some embodiments.
- first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections; it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
- terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- FIG. 1 illustrates an exemplary block diagram of an environment (100) for managing one or more API requests in a communication network (106), according to various embodiments of the present disclosure.
- the environment (100) comprises a plurality of user equipments (UEs) (102-1, 102-2, ..., 102-n).
- the at least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the communication network (106).
- the label for the plurality of UEs or the one or more UEs is 102.
- the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108).
- the wireless device or the UE (102) may include, but are not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch, a computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or Voice Over Internet Protocol (VoIP) capabilities.
- the UEs (102) may include, but are not limited to, any electrical, electronic, electro-mechanical equipment or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, where the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch enabled screen, an electronic pen, and the like. It may be appreciated that the UEs (102) may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, and a landline with assigned extension within the communication network (106).
- the communication network (106) may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (RFID), infrared, laser, Near Field Magnetics, etc.
- the communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet- switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
- the communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
- the communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
- the communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VoIP network, or some combination thereof.
- One or more network elements can be, for example, but not limited to, a base station that is located in the fixed or stationary part of the communication network (106).
- the base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell.
- the base station enables transmission of radio signals to the UE (102) or a mobile transceiver.
- a radio signal may comply with radio signals as, for example, standardized by the 3rd Generation Partnership Project (3GPP) or, generally, in line with one or more of the above listed systems.
- a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
- the 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
- the system (108) is communicatively coupled to a server (104) via the communication network (106).
- the server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like.
- the server (104) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
- the environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality of UEs (102) via the communication network (106).
- the remote server (104) is configured to execute the requests in the communication network (106).
- the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104).
- the enterprise provisioning server provides flexibility for enterprises, e-commerce, and finance entities to update/create/delete information related to the requests for the API service in real time as per their business needs.
- a user with administrator rights can access and retrieve the requests for the API service and perform real-time analysis in the system (108).
- the system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof.
- the system (108) may operate at various entities or a single entity (including, for example, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defense facility side, or any other facility) that provides service.
- FIG. 2 illustrates a block diagram of the system (108) provided for managing one or more API requests in the communication network (106), according to one or more embodiments of the present invention.
- the system (108) includes the one or more processors (202), the memory (204), an input/output interface unit (206), a display (208), an input device (210), and the database (214).
- the one or more processors (202), hereinafter referred to as the processor (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
- the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
- Information related to the requests for the API service may be provided or stored in the memory (204) of the system (108).
- the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204).
- the memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service.
- the memory (204) may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as disk memory, Erasable Programmable Read-Only Memory (EPROM), flash memory, unalterable memory, and the like.
- the system (108) may include an interface(s).
- the interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like.
- the interface(s) may facilitate communication for the system.
- the interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the database (214).
- the processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
- the information related to the requests for the API service may further be configured to be rendered on the user interface (206).
- the user interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art.
- the user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology.
- the display (208) may be integrated within the system (108) or connected externally.
- the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
- the database (214) may be communicably connected to the processor (202) and the memory (204).
- the database (214) may be configured to store and retrieve the requests pertaining to features, services, or workflows of the system (108), access rights, attributes, an approved list, and authentication data provided by an administrator.
- the database (214) may be outside the system (108) and communicate through a wired medium or a wireless medium.
- the processor (202) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202).
- programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202).
- the processor (202) includes a retrieving unit (216), a determining unit (218), an updating unit (220), a receiving unit (224), a parsing unit (226), a selecting unit (228), and a transmitting unit (230).
- the retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways.
- the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions.
- the memory (204) may store instructions that, when executed by the processing resource, implement the processor.
- the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource.
- the processor (202) may be implemented by electronic circuitry.
- the retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230) are communicably coupled to each other.
- the retrieving unit (216) retrieves information pertaining to one or more API calls and performance of one or more API providers from an API call log corresponding to each of one or more API providers.
- the information can be, for example, but not limited to response time of each of the one or more API providers, latency, availability of each of the one or more API providers, and reliability of each of the one or more API providers.
- the system (108) has a web application that integrates with several third-party APIs (e.g., weather data, payment processing, and user authentication).
- the system (108) logs each API call for monitoring and analysis purposes.
- Each API call log entry contains information such as API provider, timestamp, response time, latency, and status code.
- the API provider identifies which API was called (e.g., WeatherAPI, PaymentAPI or the like).
- the timestamp indicates when the API call was made.
- the response time indicates how long it took for the API to respond.
- the latency indicates time taken for the request to travel to the API server and back.
- the status code indicates response status from the API (e.g., 200 OK, 500 Internal Server Error or the like).
- the retrieving unit (216) queries the logs to extract performance data for each API provider.
- the retrieving unit (216) retrieves the following metrics from the logs: response time, latency, availability, and reliability, as sketched below.
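- As a minimal illustration of this retrieval step, the following Python sketch aggregates response time, latency, availability, and reliability per provider from log entries carrying the fields named above (provider, timestamp, response time, latency, status code). The in-memory list format, the field names, and the success-ratio approximation of availability/reliability are illustrative assumptions, not part of the disclosure.

```python
from collections import defaultdict

# Hypothetical API call log entries using the fields described above.
call_log = [
    {"provider": "WeatherAPI", "timestamp": "2024-09-01T10:00:00Z",
     "response_time_ms": 120, "latency_ms": 90, "status_code": 200},
    {"provider": "WeatherAPI", "timestamp": "2024-09-01T10:00:05Z",
     "response_time_ms": 135, "latency_ms": 95, "status_code": 200},
    {"provider": "PaymentAPI", "timestamp": "2024-09-01T10:00:07Z",
     "response_time_ms": 310, "latency_ms": 220, "status_code": 500},
]

def retrieve_metrics(log):
    """Aggregate response time, latency, availability, and reliability per provider."""
    grouped = defaultdict(list)
    for entry in log:
        grouped[entry["provider"]].append(entry)
    metrics = {}
    for provider, entries in grouped.items():
        total = len(entries)
        ok = sum(1 for e in entries if 200 <= e["status_code"] < 300)
        metrics[provider] = {
            "avg_response_time_ms": sum(e["response_time_ms"] for e in entries) / total,
            "avg_latency_ms": sum(e["latency_ms"] for e in entries) / total,
            # Availability and reliability are approximated here by the
            # ratio of successful (2xx) calls to total calls.
            "availability": ok / total,
            "reliability": ok / total,
        }
    return metrics

print(retrieve_metrics(call_log))
```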
- the determining unit (218) determines an API provider condition of each of the one or more APIs by analysing the retrieved information.
- the API provider condition corresponds to at least one of the performance efficiency, the response turnaround time, and the present user load of each of the one or more API providers.
- the determining unit (218) analyses the performance and status of one or more API providers by evaluating metrics retrieved from API call logs. The determining unit (218) assesses the condition of each API provider based on performance efficiency, response turnaround time, and current user load, among other factors.
- the determining unit (218) evaluates how effectively each API provider performs based on response time and latency. For a WeatherAPI with an average response time of 120 milliseconds (ms) and a latency of 90 ms, the WeatherAPI is performing efficiently, with the required availability and a lower error rate compared to the PaymentAPI. For the PaymentAPI, although it has a good availability rate (99.5%) and a low error rate, its response time and latency are higher compared to the WeatherAPI, which may indicate lower efficiency, particularly under high load.
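- The disclosure names the factors of the API provider condition but not a formula; the sketch below shows one hypothetical way the determining unit (218) could reduce those factors to a single score. The weights, the 0-to-1 normalization, and the ceiling values are assumptions chosen only to make the WeatherAPI/PaymentAPI comparison above concrete.

```python
# Hypothetical provider-condition score. The factors follow the text
# (performance efficiency via response time/latency, availability, and
# present user load); the weights and ceilings are illustrative assumptions.
def provider_condition(avg_response_time_ms, avg_latency_ms, availability,
                       user_load_rpm, ceiling_ms=500.0, max_load_rpm=1000.0):
    # Normalize each factor to 0..1, higher being better.
    speed = max(0.0, 1.0 - avg_response_time_ms / ceiling_ms)
    latency = max(0.0, 1.0 - avg_latency_ms / ceiling_ms)
    headroom = max(0.0, 1.0 - user_load_rpm / max_load_rpm)
    return 0.35 * speed + 0.25 * latency + 0.25 * availability + 0.15 * headroom

# WeatherAPI from the example above (120 ms response, 90 ms latency);
# the 99.5% availability and the user loads are assumed for illustration.
print(provider_condition(120, 90, 0.995, 500))   # ~0.79 (healthier condition)
print(provider_condition(310, 220, 0.995, 300))  # ~0.63 (slower under load)
```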
- the updating unit (220) updates the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers.
- upon selection of the API provider, the updating unit (220) updates details pertaining to the selected API provider in a log file, and the log file is utilized for training the AI/ML model.
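- A minimal sketch of the updating step, assuming JSON configuration and log files (the disclosure does not fix a format): the determined condition is written into each provider's configuration file, and each selection is appended to a log file later usable for AI/ML training. All file layouts and field names are hypothetical.

```python
import json
from datetime import datetime, timezone

def update_provider_condition(config_path, condition):
    """Write the determined condition into one provider's JSON config file."""
    with open(config_path, "r+") as f:
        config = json.load(f)
        config["provider_condition"] = condition  # hypothetical field name
        config["updated_at"] = datetime.now(timezone.utc).isoformat()
        f.seek(0)
        json.dump(config, f, indent=2)
        f.truncate()

def log_selection(log_path, provider, condition):
    """Append the selected provider to a JSON-lines log used for AI/ML training."""
    record = {
        "selected_provider": provider,
        "condition": condition,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage (paths are hypothetical):
# update_provider_condition("weatherapi_config.json", 0.79)
# log_selection("selection_log.jsonl", "WeatherAPI", 0.79)
```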
- the receiving unit (224) receives the one or more API requests via the UE (102).
- the parsing unit (226) parses, utilizing the AI/ML model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests.
- the parsing unit (226), using AI/ML models, automatically processes and interprets configuration files when API requests are received.
- This AI/ML-driven parsing helps in extracting and understanding essential configuration parameters like endpoints, authentication methods, rate limits, and timeouts, enabling efficient and accurate handling of the API requests.
- the use of AI/ML enhances the ability to manage complex configurations and adapt to changes more effectively.
- the AI/ML model processes the configuration file to extract and understand relevant parameters using techniques such as named entity recognition, classification, and pattern recognition.
- the named entity recognition identifies entities such as endpoints, tokens, and rate limits.
- the classification categorizes the configuration details into predefined classes (e.g., authentication type, rate limits or the like).
- the pattern recognition detects the patterns or anomalies in the configuration settings (e.g., unusually high rate limits).
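- The sketch below stands in for the AI/ML-driven parsing described above, using simple rules in place of a trained model: it extracts an endpoint (entity), an authentication type (classification), and rate limits and timeouts, and flags an unusually high rate limit (pattern/anomaly detection). The configuration layout, field names, and ceiling are assumptions.

```python
import json

# Hypothetical provider configuration; field names are assumptions.
RAW_CONFIG = """
{
  "endpoint": "https://api.weather.example/v1",
  "auth": {"type": "oauth2", "token_url": "https://auth.example/token"},
  "rate_limit_per_minute": 100000,
  "timeout_ms": 2000
}
"""

def parse_config(raw, rate_limit_ceiling=10000):
    """Extract key parameters and flag anomalous settings."""
    config = json.loads(raw)
    extracted = {
        "endpoint": config.get("endpoint"),               # entity: endpoint
        "auth_type": config.get("auth", {}).get("type"),  # class: auth method
        "rate_limit": config.get("rate_limit_per_minute"),
        "timeout_ms": config.get("timeout_ms"),
    }
    anomalies = []
    # Pattern check: an unusually high rate limit, as in the example above.
    if extracted["rate_limit"] and extracted["rate_limit"] > rate_limit_ceiling:
        anomalies.append(f"rate limit {extracted['rate_limit']} exceeds "
                         f"ceiling {rate_limit_ceiling}")
    return extracted, anomalies

print(parse_config(RAW_CONFIG))
```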
- the selecting unit (228) selects, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. In an embodiment, the selecting unit (228) compares the API provider condition corresponding to each of the API providers. Further, the selecting unit (228) selects the API provider with a required performance efficiency, a required response turnaround time, and a required user load based on the comparison.
- the system (108) has a web application that needs to fetch data from a weather API.
- the system (108) considers two API providers, WeatherAPI and WeatherServiceX, and the system (108) wants to select the one that offers the best performance efficiency, the lower response turnaround time, and the lighter user load.
- for the WeatherAPI, the performance efficiency is 90% (measured by the ratio of successful responses to total requests), the average response turnaround time is 120 ms, and the current user load is 500 requests per minute.
- for the WeatherServiceX, the performance efficiency is 85%, the average response turnaround time is 150 ms, and the current user load is 300 requests per minute.
- the selecting unit (228) must prioritize the required performance efficiency and the required response turnaround time while considering user load. Based on the above condition, the WeatherAPI is selected due to its superior performance efficiency and response turnaround time, despite its higher user load. The slight advantage in user load for WeatherServiceX is outweighed by the better overall performance of WeatherAPI. Further, the transmitting unit (230) transmits the one or more API requests to the selected API provider.
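- The following sketch reproduces this comparison numerically. The weighted-score form and the weights are assumptions; they are chosen only to reflect the stated priority of performance efficiency and turnaround time over user load, and with the figures above they select WeatherAPI.

```python
# Hypothetical weighted comparison of the two providers described above.
providers = {
    "WeatherAPI":      {"efficiency": 0.90, "turnaround_ms": 120, "load_rpm": 500},
    "WeatherServiceX": {"efficiency": 0.85, "turnaround_ms": 150, "load_rpm": 300},
}

def score(p, max_ms=300.0, max_rpm=1000.0):
    # Normalize turnaround time and load to 0..1, higher being better;
    # the ceilings and weights are illustrative assumptions.
    speed = 1.0 - p["turnaround_ms"] / max_ms
    headroom = 1.0 - p["load_rpm"] / max_rpm
    return 0.5 * p["efficiency"] + 0.35 * speed + 0.15 * headroom

best = max(providers, key=lambda name: score(providers[name]))
# WeatherAPI scores ~0.735 vs ~0.705: its efficiency and turnaround
# advantage outweighs WeatherServiceX's lighter user load.
print(best)
```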
- FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration and should not be construed as limiting the scope of the present disclosure.
- the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108).
- the one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305) to enable operation of the UE (102-1).
- the execution of the stored instructions by the one or more primary processors (305) further causes the UE (102-1) to transmit one or more Application Programming Interface (API) requests to the one or more processors (202).
- the one or more processors (202) is configured to transmit a response content related to the API call request to the UE (102-1).
- the one or more processors (202) of the system (108) are configured to transmit the response content to the at least one UE (102-1).
- a kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108).
- the kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106).
- the resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
- the system (108) includes the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210).
- the operations and functions of the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210) are already explained in FIG. 2.
- the processor (202) includes the retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230).
- FIG. 4 illustrates a system architecture (400) for managing one or more API requests in the network (106), in accordance with some embodiments.
- the system architecture (400) comprises a common API gateway (422), an API consumer (402) communicably connected to the common API gateway (422) via the communication network (106), and an API services repository (424) communicably connected to the common API gateway (422) via the network (106).
- the common API gateway (422) may be a part of a subscriber system.
- the common API gateway (422) may be used to expose, secure, and manage backend applications, infrastructure and/or network systems as published APIs.
- the API consumer (402) may communicate with the common API gateway (422) for accessing the published APIs.
- the API services repository (424) may be a part of the common API gateway (422).
- An API orchestration configuration unit (410), an API subscriber state service and eviction rule configuration unit (412), an API synchronization call unit (414), an API response collection unit (416), an API subscriber state service and eviction rule engine (418) and an API asynchronization call unit (420) are included in the API gateway (422).
- the API orchestration configuration unit (410) acts as a control center for managing how APIs work together, ensuring that they function harmoniously to deliver the desired services or results in the network (106).
- the API synchronization call unit (414) is a specialized component designed to manage the coordination and execution of multiple API requests in a synchronized manner, ensuring that they work together effectively within the network (106).
- the API response collection unit (416) is a key component in managing and processing the results from multiple API requests.
- the API response collection unit (416) plays a crucial role in aggregating, parsing, and consolidating responses to ensure that the final output is accurate, complete, and useful within the network (106).
- the API asynchronization call unit (420) is a key component in managing API requests that are handled asynchronously.
- the API asynchronization call unit (420) enables the efficient processing of multiple requests without requiring sequential execution, improves performance and scalability, and enhances the responsiveness of applications within a network environment.
- the API provider load distributor rule configuration unit (412) is used for defining and managing the rules and policies that determine how incoming API requests are distributed across various API providers or endpoints.
- the API provider load distributor rule configuration engine (418) executes and manages the load distribution rules defined by the configuration unit (412).
- the API provider load distributor rule configuration engine (418) is responsible for the actual application of the rules and the real-time handling of the API requests.
- the common API gateway (422) comprises the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) configured to run an AI/ML-based process for automatically managing the data of the API provider.
- the AI/ML-based process is implemented for automatically identifying and evicting the data of the API provider from the API services repository (424).
- the common API gateway (422) is a provisioning server hosting application logic for creating/modifying/displaying/deleting subscription information, authentication information, and equipment information.
- the common API gateway (422) supports NETCONF/SSH and RESTful/HTTP interfaces.
- the common API gateway (422) supports both client and server-side validation of input parameters for syntax and semantic checks.
- the common API gateway (422) provides a lightweight CLI for all provisioning requirements.
- the common API gateway (422) may communicate with the Common API Framework (CAPIF) (408), an Identity and Access Management (IAM) unit (406), and an Edge Load Balancer (ELB) unit (404a, 404b).
- the CAPIF (408) is a complete 3rd Generation Partnership Project (3GPP) API framework that covers functionality related to onboarding and offboarding API consumers, and registering and releasing APIs.
- the IAM unit (406) is used for authentication and authorization of the API consumers (402).
- the ELB units (404a, 404b) automatically distribute incoming application traffic across multiple targets and virtual appliances in one or more availability regions.
- the API orchestration configuration unit (410) allows multiple ways of routing eastbound API calls to multiple westbound API calls.
- the dynamic transformation and manipulation of API data enables transforming a request as per the destination application and also transforming the response as required by the user (e.g., a service provider, or the like).
- the dynamic transformation and manipulation of the API data further performs a body-to-body transformation and manipulation, a query-parameter transformation and manipulation, and a header transformation and manipulation, as sketched below.
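- A minimal sketch of such transformation, assuming a simple rename-rule map per destination provider (the mapping format and all field names are hypothetical): headers, query parameters, and body fields are rewritten so an eastbound request matches a westbound provider's expected shape.

```python
# Hypothetical request transformation driven by per-destination rename rules.
def transform_request(request, rules):
    out = {"headers": dict(request["headers"]),
           "query": dict(request["query"]),
           "body": dict(request["body"])}
    # Rename headers, query parameters, and body fields per the rule map.
    for section, rule_key in (("headers", "header_map"),
                              ("query", "query_map"),
                              ("body", "body_map")):
        for old, new in rules.get(rule_key, {}).items():
            if old in out[section]:
                out[section][new] = out[section].pop(old)
    return out

request = {"headers": {"X-Api-Key": "abc"},
           "query": {"city": "Pune"},
           "body": {"units": "metric"}}
rules = {"header_map": {"X-Api-Key": "Authorization"},
         "query_map": {"city": "q"},
         "body_map": {"units": "unit_system"}}
print(transform_request(request, rules))
```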
- template-based API provisioning allows the user to create and manage APIs on demand using the API gateway (422).
- the API orchestration configuration unit (410) improves the agility, flexibility, and cost-efficiency of the API development and management process as the API is integrated dynamically.
- the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) may be configured to read an API provider load distributor rule engine configuration file. Upon reading the API provider load distributor rule engine configuration file successfully, the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) may implement AI/ML-based algorithms to take decisions on real-time requests coming from customers. The decisions concern how to handle each real-time request and to which API provider to forward it. The AI/ML-based algorithm may make the decision based on a number of fixed parameters.
- Examples of the fixed parameters may include, but are not limited to: total API requests coming to the CAPIF (408); total requests coming to each API provider; minimum/maximum/average response time of the API provider; standard usage quotas to be consumed over a longer time period (e.g., total subscriptions, and total resources such as calls and bandwidth); rate limiting based on subscriptions, APIs, resources, IP, geo-location, bandwidth, request payload (e.g., headers), user/access token, OAuth token claims, request methods (e.g., GET, POST), and traffic spikes; and rate limiting based on complex, extensible, and dynamic rules, scenarios, and events.
- the AI/ML-based algorithm may implement an API provider load distributor.
- the system architecture (400) may analyze the past data of the API providers and redirect the inbound request to the API provider that can serve it best.
- predefined parameters may already be defined in the API provider load distributor rule engine configuration, such as: a. total API requests coming to the CAPIF (408); b. total requests coming to each API provider; c. minimum/maximum/average response time of the API provider; d. standard usage quotas to be consumed over a longer time period (e.g., total subscriptions, and total resources such as calls and bandwidth); e. rate limiting based on subscriptions, APIs, resources, IP, geo-location, bandwidth, request payload (e.g., headers), user/access token, OAuth token claims, request methods (e.g., GET, POST), and traffic spikes; and f. rate limiting based on complex, extensible, and dynamic rules, scenarios, and events.
- All the parameters and rules may be run-time configurable and may be added dynamically as per requirements by using the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418).
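- The sketch below expresses a hypothetical rule-engine configuration covering the lettered parameters above, plus a helper that applies the first matching rate-limit rule to a request context. The key names, thresholds, and matching scheme are all assumptions; in practice the rules would be read from the run-time configurable file by the configuration unit (412) and applied by the rule engine (418).

```python
# Hypothetical load-distributor rule configuration, expressed as a dict.
LOAD_DISTRIBUTOR_RULES = {
    "capif_total_request_ceiling": 50000,    # a. total API requests to CAPIF
    "per_provider_request_ceiling": 10000,   # b. total requests per provider
    "response_time_ms": {"min": 50, "max": 500, "avg_target": 150},  # c.
    "usage_quotas": {                        # d. longer-period quotas
        "total_subscriptions": 1000,
        "total_calls_per_day": 200000,
        "bandwidth_gb_per_day": 50,
    },
    "rate_limits": [                         # e. dimension-based rate limits
        {"match": {"subscription_tier": "free"}, "limit_per_minute": 60},
        {"match": {"geo": "IN", "method": "POST"}, "limit_per_minute": 600},
        {"match": {"oauth_claim": "partner"}, "limit_per_minute": 6000},
    ],
    "dynamic_rules": [                       # f. extensible event-driven rules
        {"event": "traffic_spike", "action": "shed_lowest_tier"},
    ],
}

def pick_rate_limit(request_ctx):
    """Return the first rate limit whose match keys all equal the context."""
    for rule in LOAD_DISTRIBUTOR_RULES["rate_limits"]:
        if all(request_ctx.get(k) == v for k, v in rule["match"].items()):
            return rule["limit_per_minute"]
    return None  # no rule matched; a default policy would apply

print(pick_rate_limit({"geo": "IN", "method": "POST"}))  # 600
```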
- FIG. 5 is a flow diagram (500) illustrating the method for managing one or more API requests in the network (106), according to various embodiments of the present disclosure.
- At 502, the method includes retrieving the information pertaining to the one or more API calls and the performance of the one or more API providers from the API call log corresponding to each of the one or more API providers. In an embodiment, the method allows the retrieving unit (216) to retrieve this information.
- At 504, the method includes determining the API provider condition of or corresponding to each of the one or more APIs by analysing the retrieved information. In an embodiment, the method allows the determining unit (218) to determine the API provider condition of each of the one or more API providers by analysing the retrieved information.
- the method includes updating the API provider condition of each of the one or more APIs in the configuration file of each of the one or more API providers.
- the method allows the updating unit (220) to update the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers.
- the method includes receiving the one or more API requests via the UE (102).
- the method allows the receiving unit (224) to receive the one or more API requests via the UE (102).
- the method includes transmitting the one or more API requests to the selected API provider.
- the method allows the transmitting unit (230) to transmit the one or more API requests to the selected API provider.
- FIG. 6 is an example flow diagram (600) illustrating an internal call flow for managing one or more API requests in the network (106), in accordance with some embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer And Data Communications (AREA)
Abstract
The present disclosure relates to a method of managing API requests in a network (106) by one or more processors (202). The method includes updating the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. Further, the method includes receiving the one or more API requests via at least one UE. Further, the method includes parsing, utilizing an AI/ML model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. Further, the method includes selecting, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. Further, the method includes transmitting the one or more API requests to the selected API provider.
Description
SYSTEM AND METHOD OF MANAGING ONE OR MORE APPLICATION PROGRAMMING INTERFACE (API) REQUESTS IN NETWORK
FIELD OF THE INVENTION
[0001] The present invention relates to the field of networking and, more particularly, to a method and a system for Artificial Intelligence (AI)/Machine Learning (ML) based identification and selection of an edge Application Programming Interface (API) service provider to serve a service call flow.
BACKGROUND OF THE INVENTION
[0002] Integration with multiple systems to meet their requirements has revealed that, in several cases, inbound traffic is not managed effectively. Due to the deployment of multiple instances of an API provider, and the load from inbound requests by API subscribers, an internal system struggles to handle all requests optimally. Issues include inconsistent response times, latency, availability, and even discrepancies in API provider versions. Some instances exhibit low latency and quick response times, while others experience delays that exceed standard expectations.
[0003] There is a need to overcome the above-mentioned drawbacks.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a system and a method of managing one or more Application Programming Interface (API) requests in a network.
[0005] In one aspect of the present invention, the method of managing the one or more API requests in a network is disclosed. The method includes retrieving, by one or more processors, information pertaining to one or more API calls and the performance of one or more API providers from an API call log corresponding to each of the one or more API providers. Further, the method includes determining, by the one or more
processors, an API provider condition of each of the one or more APIs by analysing the retrieved information. Further, the method includes updating, by the one or more processors, the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. Further, the method includes receiving, by the one or more processors, the one or more API requests via at least one User Equipment (UE). Further, the method includes parsing, by the one or more processors, utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. Further, the method includes selecting, by the one or more processors, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. Further, the method includes transmitting, by the one or more processors, the one or more API requests to the selected API provider.
[0006] In an embodiment, the information includes response time, latency, availability, and reliability of each of the one or more API providers.
[0007] In an embodiment, the API provider condition corresponds to at least one of a performance efficiency, a response turnaround time, and a present user load of each of the one or more API providers.
[0008] In an embodiment, on selection of the API provider, the method includes updating, by the one or more processors, details pertaining to the selected API provider in a log file. The log file is utilized for training the AI/ML model.
[0009] In an embodiment, on parsing, the method includes comparing, by the one or more processors, the API provider condition corresponding to each of the API providers. Further, the method includes selecting, by the one or more processors, the API provider with a required performance efficiency, a required response turnaround time, and a required user load based on the comparison.
[0010] In one aspect of the present invention, the system for managing one or more API requests in a network is disclosed. The system includes a retrieving unit, a determining unit, an updating unit, a receiving unit, a parsing unit, a selecting unit, and a transmitting unit. The retrieving unit is configured to retrieve information pertaining to one or more API calls and performance of one or more API providers from an API call log corresponding to each of the one or more API providers. The determining unit is configured to determine an API provider condition of each of the one or more APIs by analysing the retrieved information. The updating unit is configured to update the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. The receiving unit is configured to receive the one or more API requests via at least one UE. The parsing unit is configured to parse utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. The selecting unit is configured to select, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. The transmitting unit transmits the one or more API requests to the selected API provider.
[0011] In one aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions cause the processor to retrieve information pertaining to one or more API calls and the performance of one or more API providers from an API call log corresponding to each of the one or more API providers. Further, the processor determines an API provider condition of each of the one or more API providers by analysing the collected information. Further, the processor updates the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. Further, the processor receives the one or more API requests via at least one User Equipment (UE). Further, the processor parses the configuration file corresponding to each of the one or more API providers upon receipt of the one or
more API requests. Further, the processor selects an API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. Further, the processor transmits the one or more API requests to the selected API provider.
[0012] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0014] FIG. 1 is an exemplary block diagram of an environment for managing one or more API requests in a network, according to various embodiments of the present disclosure.
[0015] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0016] FIG. 3 is an example schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure.
[0017] FIG. 4 illustrates a system architecture for managing the one or more API requests in the network, in accordance with some embodiments.
[0018] FIG. 5 is a flow diagram illustrating the method for managing one or more API requests in the network, according to various embodiments of the present disclosure.
[0019] FIG. 6 is an example flow diagram illustrating an internal call flow for managing the one or more API requests in the network, in accordance with some embodiments.
[0020] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed here below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as schematic representations, and the elements shown are not necessarily drawn to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in
the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0026] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0027] Further, the terms first, second, etc., may be used herein to describe various elements, components, regions, layers, and/or sections; it should be understood that these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer, or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0028] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect
relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being “directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.).
[0029] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0030] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0031] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the
computer system memories or registers or other such information storage, transmission or display devices.
[0032] FIG. 1 illustrates an exemplary block diagram of an environment (100) for managing one or more API requests in a communication network (106), according to various embodiments of the present disclosure. The environment (100) comprises a plurality of User Equipments (UEs) (102-1, 102-2, ..., 102-n). At least one UE (102-n) from the plurality of the UEs (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via the communication network (106). Hereafter, the plurality of UEs or the one or more UEs are labelled 102.
[0033] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be wireless devices or communication devices that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch, a computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or Voice Over Internet Protocol (VoIP) capabilities. In an embodiment, the UEs (102) may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, where the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch-enabled screen, an electronic pen, and the like. It may be appreciated that the UEs (102) may not be restricted to the mentioned devices, and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102)
may include a fixed landline, and a landline with assigned extension within the communication network (106).
[0034] The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0035] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0036] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable
network, a cellular network, a satellite network, a fiber optic network, a VoIP network, or some combination thereof.
[0037] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE (102) or a mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3rd Generation Partnership Project (3GPP) or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
[0038] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
[0039] The environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality
of UEs (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0040] The system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and to facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for the API service, which gets reflected in real time independent of the complexity of the network.
[0041] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprises, e-commerce, and finance to update/create/delete information related to the requests for the API service in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the API service and perform real-time analysis in the system (108).
[0042] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an implementation, the system (108) may operate at various entities or a single entity (for example, including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defense facility side, or any other facility) that provides service.
[0043] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0044] FIG. 2 illustrates a block diagram of the system (108) provided for managing one or more API requests in the communication network (106), according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes one or more processors (202), the memory (204), an input/output interface unit (206), a display (208), an input device (210), and the database (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0045] Information related to the requests for the API service may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0046] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, the processing unit/engine(s) and the database (214). The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0047] The information related to the requests for the API service may further be configured to be rendered on the user interface (206). The user interface (206) may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0048] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the requests pertaining to features, services, or workflows of the system (108), access rights, attributes, approved lists, and authentication data provided by an administrator.
In another embodiment, the database (214) may be outside the system (108) and communicate through a wired medium or a wireless medium.
[0049] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0050] In order for the system (108) to manage the one or more API requests in the communication network (106), the processor (202) includes a retrieving unit (216), a determining unit (218), an updating unit (220), a receiving unit (224), a parsing unit (226), a selecting unit (228), and a transmitting unit (230). The retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more
processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0051] In order for the system (108) to manage the one or more API requests in the communication network (106), the retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230) are communicably coupled to each other. The retrieving unit (216) retrieves information pertaining to one or more API calls and the performance of one or more API providers from an API call log corresponding to each of the one or more API providers. The information can include, for example, but is not limited to, the response time of each of the one or more API providers, latency, the availability of each of the one or more API providers, and the reliability of each of the one or more API providers.
[0052] Consider that the system (108) has a web application that integrates with several third-party APIs (e.g., weather data, payment processing, and user authentication). The system (108) logs each API call for monitoring and analysis purposes. Each API call log entry contains information such as the API provider, a timestamp, the response time, the latency, and the status code.
[0053] The API provider identifies which API was called (e.g., WeatherAPI, PaymentAPI or the like). The timestamp indicates when the API call was made. The response time indicates how long it took for the API to respond. The latency indicates time taken for the request to travel to the API server and back. The status code indicates response status from the API (e.g., 200 OK, 500 Internal Server Error or the like). The retrieving unit (216) queries the logs to extract performance data for each API
provider. The retrieving unit (216) retrieves the following metrics from the logs: response time, latency, availability, and reliability.
[0054] The response time indicates an average response time for each API provider over a specific period. The latency indicates an average latency for each API provider. The availability indicates the percentage of successful responses (i.e., responses with status code 200) versus failures. The reliability indicates the consistency of the API responses over time, including error rates and downtime.
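By way of illustration only, the following is a minimal sketch of how such per-provider metrics could be derived from in-memory log entries. The field names and log-entry format are assumptions for the sketch, not a prescribed schema.

```python
from collections import defaultdict

def summarize_api_logs(log_entries):
    """Aggregate per-provider performance metrics from API call log entries.

    Each entry is assumed to look like:
    {"provider": "WeatherAPI", "timestamp": 1694649600,
     "response_time_ms": 120, "latency_ms": 90, "status_code": 200}
    """
    grouped = defaultdict(list)
    for entry in log_entries:
        grouped[entry["provider"]].append(entry)

    metrics = {}
    for provider, entries in grouped.items():
        total = len(entries)
        successes = sum(1 for e in entries if e["status_code"] == 200)
        metrics[provider] = {
            # Average response time and latency over the logged period.
            "avg_response_time_ms": sum(e["response_time_ms"] for e in entries) / total,
            "avg_latency_ms": sum(e["latency_ms"] for e in entries) / total,
            # Availability: share of successful (status 200) responses.
            "availability": successes / total,
            # Reliability proxy: error rate over the same period.
            "error_rate": (total - successes) / total,
        }
    return metrics
```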
[0055] The determining unit (218) determines an API provider condition of each of the one or more APIs by analysing the retrieved information. In an embodiment, the API provider condition corresponds to at least one of the performance efficiency, the response turnaround time, and the present user load of each of the one or more API providers. In an example, the determining unit (218) analyses the performance and status of one or more API providers by evaluating metrics retrieved from API call logs. The determining unit (218) assesses the condition of each API provider based on performance efficiency, response turnaround time, and current user load, among other factors.
[0056] In an example, the determining unit (218) evaluates how effectively the API provider performs based on response time and latency. For a WeatherAPI, with an average response time of 120 milliseconds (ms) and a latency of 90 ms, the WeatherAPI is performing efficiently. The WeatherAPI has a higher availability and a lower error rate compared to the PaymentAPI. As for the PaymentAPI, although it has a good availability rate (99.5%), its response time and latency are higher compared to those of the WeatherAPI. This may indicate lower efficiency, particularly under high load.
[0057] The updating unit (220) updates the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers. In an example, the updating unit (220) updates details pertaining to the selected API provider in a log file. The log file is utilized for training the AI/ML model, upon selection of the API provider.
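As a hedged illustration of this updating step, the sketch below writes a provider condition into a per-provider JSON configuration file and appends selection details to a training log. The one-file-per-provider layout, the key names, and the JSONL log format are assumptions, not a defined format of the disclosure.

```python
import json
from pathlib import Path

def update_provider_condition(provider, condition, config_dir="providers"):
    """Write the latest API provider condition into that provider's
    configuration file (one JSON file per provider is an assumed layout)."""
    path = Path(config_dir) / f"{provider}.json"
    config = json.loads(path.read_text()) if path.exists() else {}
    # e.g. condition = {"efficiency": 0.90, "avg_rt_ms": 120, "load_rpm": 500}
    config["condition"] = condition
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))

def log_selection(provider, condition, log_file="selection_log.jsonl"):
    """Append the selected provider and its condition to a log file that
    can later serve as training data for the AI/ML model."""
    with open(log_file, "a") as f:
        f.write(json.dumps({"selected": provider, "condition": condition}) + "\n")
```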
[0058] The receiving unit (224) receives the one or more API requests via the UE (102). The parsing unit (226) parses, utilizing the AI/ML model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. In an example, the parsing unit (226), with AI/ML models, automatically processes and interprets configuration files when API requests are received. This AI/ML-driven parsing helps in extracting and understanding essential configuration parameters like endpoints, authentication methods, rate limits, and timeouts, enabling efficient and accurate handling of the API requests. The use of AI/ML enhances the ability to manage complex configurations and adapt to changes more effectively. In an example, the AI/ML model processes the configuration file to extract and understand relevant parameters using techniques such as named entity recognition, classification, and pattern recognition. The named entity recognition identifies entities such as endpoints, tokens, and rate limits. The classification categorizes the configuration details into predefined classes (e.g., authentication type, rate limits, or the like). The pattern recognition detects patterns or anomalies in the configuration settings (e.g., unusually high rate limits).
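A trained AI/ML parser is not reproduced here; the following rule-based stand-in merely illustrates the kind of parameter extraction and pattern check described above. All keys and the anomaly threshold are assumptions for the sketch.

```python
def parse_provider_config(config):
    """Extract key parameters from one provider's configuration dict and
    flag anomalies; a trained model could replace the rule-based checks."""
    parsed = {
        "endpoint": config.get("endpoint"),
        "auth_type": config.get("auth", {}).get("type"),  # e.g. "oauth2"
        "rate_limit_rpm": config.get("rate_limit_rpm"),
        "timeout_ms": config.get("timeout_ms"),
    }
    anomalies = []
    # Pattern check: unusually high rate limit (threshold is an assumption).
    if parsed["rate_limit_rpm"] and parsed["rate_limit_rpm"] > 100_000:
        anomalies.append("unusually high rate limit")
    if parsed["auth_type"] is None:
        anomalies.append("missing authentication type")
    return parsed, anomalies
```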
[0059] The selecting unit (228) selects, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. In an embodiment, the selecting unit (228) compares the API provider condition corresponding to each of the API providers. Further, the selecting unit (228) selects the API provider with a required performance efficiency, a required response turnaround time, and a required user load based on the comparison. Consider that the system (108) has a web application that needs to fetch data from a weather API. The system (108) considers two API providers, WeatherAPI and WeatherServiceX, and wants to select the one that offers the better performance efficiency, the lower response turnaround time, and the lighter user load.
[0060] For the WeatherAPI, the performance efficiency is 90% (measured by the ratio of successful responses to total requests), the average response turnaround time is 120 ms, and the current user load is 500 requests per minute. For the WeatherServiceX, the performance efficiency is 85%, the average response turnaround time is 150 ms, and the current user load is 300 requests per minute. The selecting unit (228) prioritizes the required performance efficiency and the required response turnaround time while considering the user load. Based on the above conditions, the WeatherAPI is selected due to its superior performance efficiency and response turnaround time, despite its higher user load. The slight advantage in user load for the WeatherServiceX is outweighed by the better overall performance of the WeatherAPI. Further, the transmitting unit (230) transmits the one or more API requests to the selected API provider.
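The comparison above can be illustrated with a small weighted-scoring sketch; the weights and field names are assumptions chosen for the sketch, and a deployed AI/ML model would learn such trade-offs rather than fix them by hand.

```python
def select_provider(conditions, weights=(0.5, 0.3, 0.2)):
    """Pick a provider by weighted score: efficiency counts positively,
    turnaround time and user load count negatively (normalized)."""
    w_eff, w_rt, w_load = weights
    max_rt = max(c["avg_rt_ms"] for c in conditions.values())
    max_load = max(c["load_rpm"] for c in conditions.values())

    def score(c):
        return (w_eff * c["efficiency"]
                + w_rt * (1 - c["avg_rt_ms"] / max_rt)
                + w_load * (1 - c["load_rpm"] / max_load))

    return max(conditions, key=lambda name: score(conditions[name]))

conditions = {
    "WeatherAPI":      {"efficiency": 0.90, "avg_rt_ms": 120, "load_rpm": 500},
    "WeatherServiceX": {"efficiency": 0.85, "avg_rt_ms": 150, "load_rpm": 300},
}
print(select_provider(conditions))  # -> WeatherAPI with these weights
```

With the example weights, the WeatherAPI scores about 0.51 against about 0.505 for the WeatherServiceX, matching the selection described above.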
[0061] Examples of managing the one or more API requests in the network are explained with reference to FIG. 4 and FIG. 6.
[0062] FIG. 3 is an example schematic representation of the system (300) of FIG. 1, in which the operations of various entities are explained, according to various embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0063] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to operate. The execution of the stored instructions by the one or more primary processors (305) further causes the UE (102-1) to transmit one or more Application Programming Interface (API) requests to the one or more processors (202).
[0064] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the API call request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least the UE (102-1). A kernel (315) is a core component serving as the primary interface between the hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to enable the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read-Only Memory (ROM).
[0065] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210), whose operations and functions are already explained with reference to FIG. 2. Further, the processor (202) includes the retrieving unit (216), the determining unit (218), the updating unit (220), the receiving unit (224), the parsing unit (226), the selecting unit (228), and the transmitting unit (230), whose operations and functions are likewise already explained with reference to FIG. 2. For the sake of brevity, the same operations (or repeated information) are not explained again in this disclosure.
[0066] FIG. 4 illustrates a system architecture (400) for managing one or more API requests in the network (106), in accordance with some embodiments. The system architecture (400) comprises a common API gateway (422), an API consumer (402) communicably connected to the common API gateway (422) via the communication network (106), and an API services repository (424) communicably connected to the common API gateway (422) via the network (106). In an embodiment of the present
invention, the common API gateway (422) may be a part of a subscriber system. The common API gateway (422) may be used to expose, secure, and manage backend applications, infrastructure, and/or network systems as published APIs. The API consumer (402) may communicate with the common API gateway (422) for accessing the published APIs. In one embodiment of the present invention, the API services repository (424) may be a part of the common API gateway (422).
[0067] An API orchestration configuration unit (410), an API provider load distributor rule configuration unit (412), an API synchronization call unit (414), an API response collection unit (416), an API provider load distributor rule configuration engine (418), and an API asynchronization call unit (420) are included in the API gateway (422).
[0068] The API orchestration configuration unit (410) acts as a control center for managing how APIs work together, ensuring that they function harmoniously to deliver the desired services or results in the network (106). The API synchronization call unit (414) is a specialized component designed to manage the coordination and execution of multiple API requests in a synchronized manner, ensuring that they work together effectively within the network (106). The API response collection unit (416) is a key component in managing and processing the results from multiple API requests. The API response collection unit (416) plays a crucial role in aggregating, parsing, and consolidating responses to ensure that the final output is accurate, complete, and useful within the network (106). The API asynchronization call unit (420) is a key component in managing API requests that are handled asynchronously. The API asynchronization call unit (420) enables the efficient processing of multiple requests without requiring sequential execution, improves performance and scalability, and enhances the responsiveness of applications within a network environment. The API provider load distributor rule configuration unit (412) is used for defining and managing the rules and policies that determine how incoming API requests are distributed across various API providers or endpoints. The API provider load distributor rule configuration engine (418) executes and manages the load distribution rules defined by the
configuration unit (412). The API provider load distributor rule configuration engine (418) is responsible for the actual application of the rules and the real-time handling of the API requests.
[0069] In an exemplary embodiment, the common API gateway (422) comprises the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) configured to run an AI/ML-based process for automatically managing the data of the API providers. In an embodiment, the AI/ML-based process is implemented for automatically identifying and evicting the data of an API provider from the API services repository (424).
[0070] The common API gateway (422) is a provisioning server hosting application logic for the creation/modification/display/deletion of subscription information, authentication information, and equipment information. The common API gateway (422) supports NETCONF/SSH and RESTful/HTTP interfaces. The common API gateway (422) supports both client- and server-side validation of input parameters for syntax and semantic checks. The common API gateway (422) provides a lightweight CLI for all provisioning requirements. The common API gateway (422) may communicate with a Common API Framework (CAPIF) (408), an Identity and Access Management (IAM) unit (406), and Edge Load Balancer (ELB) units (404a, 404b). The CAPIF (408) is a complete 3rd Generation Partnership Project (3GPP) API framework that covers functionality related to on-boarding and off-boarding API consumers and registering and releasing APIs. The IAM unit (406) is used for authentication and authorization of the API consumers (402). The ELB units (404a, 404b) automatically distribute incoming application traffic across multiple targets and virtual appliances in one or more availability regions.
[0071] Further, the API orchestration configuration unit (410) allows multiple ways of routing eastbound API calls to multiple westbound API calls. The dynamic transformation and manipulation of API data enables the capability of transforming a request as per the destination application and also transforming the response as required by the user (e.g., a service provider, or the like). The dynamic transformation and manipulation of the API data further performs a body-to-body transformation and manipulation, a query parameter transformation and manipulation, and a header transformation and manipulation. By using the API orchestration configuration unit (410), template-based API provisioning allows the user to create and manage APIs on demand using the API gateway (422). The API orchestration configuration unit (410) improves the agility, flexibility, and cost-efficiency of the API development and management process as the API is integrated dynamically.
[0072] The API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) may be configured to read an API provider load distributor rule engine configuration file. Upon reading the API provider load distributor rule engine configuration file successfully, the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418) may implement AI/ML-based algorithms to take decisions on real-time requests coming from customers. The decision may cover how to handle each real-time request and to which API provider each real-time request is forwarded. The AI/ML-based algorithm may make the decision based on a number of fixed parameters. Examples of the fixed parameters may include, but are not limited to, the total API requests coming to the CAPIF (408), the total requests coming to each API provider, the minimum/maximum/average response time of each API provider, standard usage quotas to be consumed over a longer time period (e.g., total subscriptions, total resources such as calls and bandwidth), rate limiting based on subscriptions, APIs, resources, IP, geo-location, bandwidth, request payload (e.g., headers), user/access token, OAuth token claims, request methods (e.g., GET, POST) and traffic spikes, and rate limiting based on complex, extensible, and dynamic rules, scenarios, and events.
[0073] On the basis of the above-mentioned fixed parameters, the AI/ML-based algorithm may implement an API provider load distributor. Once an API request lands on the system architecture (400), the system architecture (400) may analyse the past data of the API providers and redirect the inbound request to the API provider that can serve it best.
[0074] In an embodiment, predefined parameters may already be defined in the API provider load distributor rule engine configuration, such as: a. the total API requests coming to the CAPIF (408); b. the total requests coming to each API provider; c. the minimum/maximum/average response time of each API provider; d. standard usage quotas to be consumed over a longer time period (e.g., total subscriptions, total resources such as calls and bandwidth); e. rate limiting based on subscriptions, APIs, resources, IP, geo-location, bandwidth, request payload (e.g., headers), user/access token, OAuth token claims, request methods (e.g., GET, POST) and traffic spikes; and f. rate limiting based on complex, extensible, and dynamic rules, scenarios, and events.
[0075] All the parameters and rules may be run-time configurable and may be added dynamically as per requirements by using the API provider load distributor rule configuration unit (412) and the API provider load distributor rule configuration engine (418), as shown in the illustrative sketch below.
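For illustration, the predefined parameters listed above might be captured in a configuration structure along the following lines. Every key and value here is an assumption for the sketch, not a defined schema of the rule engine.

```python
# Illustrative API provider load distributor rule engine configuration.
# All keys and values below are assumptions for this sketch.
LOAD_DISTRIBUTOR_RULES = {
    "capif_total_request_limit": 10_000,   # a. total API requests to CAPIF
    "per_provider_request_limit": 2_000,   # b. total requests per provider
    "response_time_ms": {"min": 50, "max": 500, "avg": 150},  # c.
    "usage_quota": {                       # d. longer-period quotas
        "subscriptions": 100, "calls": 1_000_000, "bandwidth_gb": 50,
    },
    "rate_limiting": {                     # e. rate-limit dimensions
        "by": ["subscription", "api", "resource", "ip", "geo_location",
               "bandwidth", "request_payload", "access_token",
               "oauth_claims", "request_method"],
        "traffic_spike_factor": 3,         # spike threshold vs. baseline
    },
    "dynamic_rules": [],                   # f. extensible run-time rules
}
```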
[0076] FIG. 5 is a flow diagram (500) illustrating the method for managing one or more API requests in the network (106), according to various embodiments of the present disclosure.
[0077] At 502, the method includes retrieving the information pertaining to the one or more API calls and the performance of the one or more API providers from the API call log corresponding to each of the one or more API providers. In an embodiment, the method allows the retrieving unit (216) to retrieve the information pertaining to the one or more API calls and the performance of the one or more API providers from the API call log corresponding to each of the one or more API providers.
[0078] At 504, the method includes determining the API provider condition corresponding to each of the one or more API providers by analysing the retrieved information. In an embodiment, the method allows the determining unit (218) to determine the API provider condition of each of the one or more API providers by analysing the retrieved information.
[0079] At 506, the method includes updating the API provider condition of each of the one or more APIs in the configuration file of each of the one or more API providers. In an embodiment, the method allows the updating unit (220) to update the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers.
[0080] At 508, the method includes receiving the one or more API requests via the UE (102). In an embodiment, the method allows the receiving unit (224) to receive the one or more API requests via the UE (102).
[0081] At 510, the method includes parsing, utilizing the AI/ML model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests. In an embodiment, the method allows the parsing unit (226) to parse, utilizing the AI/ML model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests.
[0082] At 512, the method includes selecting, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file. In an embodiment, the method allows the selecting unit (228) to select, utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file.
[0083] At 514, the method includes transmitting the one or more API requests to the selected API provider. In an embodiment, the method allows the transmitting unit (230) to transmit the one or more API requests to the selected API provider.
[0084] FIG. 6 is an example flow diagram (600) illustrating an internal call flow for managing one or more API requests in the network (106), in accordance with some embodiments.
[0085] At 602, an API call may start. At 604, an API provider load distribution configuration policy in the API provider load distributor rule configuration engine (418) may be implemented. At 606, an API provider load distributor rule and condition may be identified based on the conditions present in the API provider load distributor rule and condition policy configuration.
[0086] At 608, API analysis and management may be initiated on the data as per the API provider load distributor rule policy configuration. At 610, the API call is forwarded to the API provider as per the internal logic and configuration, and the response may be served back to the subscriber. At 612, the API provider load distributor logs may be stored for monitoring purposes.
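A compact, self-contained sketch of this internal call flow is given below. The rule format, provider condition fields, and stubbed forwarding are assumptions standing in for the units described above, not a definitive implementation.

```python
def handle_api_call(request, rules, provider_conditions, log_store):
    """Minimal sketch of the internal call flow of FIG. 6 (steps 602-612);
    the rule and condition formats are assumptions."""
    # 604/606: implement the load distribution policy and identify the
    # applicable rule from the rule-and-condition policy configuration.
    rule = next((r for r in rules if r.get("method") == request["method"]),
                {"prefer": "lowest_avg_rt_ms"})
    # 608: analyse provider data as per the identified rule.
    if rule.get("prefer", "lowest_avg_rt_ms") == "lowest_avg_rt_ms":
        provider = min(provider_conditions,
                       key=lambda p: provider_conditions[p]["avg_rt_ms"])
    else:  # fall back to the least-loaded provider
        provider = min(provider_conditions,
                       key=lambda p: provider_conditions[p]["load_rpm"])
    # 610: forward the call to the chosen provider (stubbed here) and
    # serve the response back to the subscriber.
    response = {"status": 200, "served_by": provider}
    # 612: store the load distributor log for monitoring purposes.
    log_store.append({"request_id": request.get("id"), "provider": provider})
    return response
```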
[0087] Below is the technical advancement of the present invention:
[0088] The proposed method can be used to suggest which API provider is more efficient in terms of performance.
[0089] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0090] The present invention offers multiple advantages over the prior art, and the advantages listed above are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0092] Environment - 100
[0093] UEs- 102, 102-1-102-n
[0094] Server - 104
[0095] Communication network - 106
[0096] System - 108
[0097] Processor - 202
[0098] Memory - 204
[0099] User Interface - 206
[00100] Display - 208
[00101] Input device - 210
[00102] Database - 214
[00103] Retrieving unit - 216
[00104] Determining unit - 218
[00105] Updating unit - 220
[00106] Receiving unit - 224
[00107] Parsing unit - 226
[00108] Selecting unit - 228
[00109] Transmitting unit - 230
[00110] System - 300
[00111] Primary processors - 305
[00112] Memory - 310
[00113] Kernel - 315
[00114] System architecture - 400
[00115] API consumer - 402
[00116] ELB unit - 404a, 404b
[00117] IAM unit - 406
[00118] CAPIF - 408
[00119] API orchestration configuration unit - 410
[00120] API provider load distributor rule configuration unit - 412
[00121] API synchronization call unit - 414
[00122] API response collection unit - 416
[00123] API provider load distributor rule configuration engine - 418
[00124] API asynchronization call unit - 420
[00125] API gateway - 422
[00126] API services repository - 424
Claims
1. A method of managing one or more Application Programming Interface (API) requests in a network (106), the method comprising the steps of: retrieving, by one or more processors (202), information pertaining to one or more API calls and one or more API providers' performance from an API call log corresponding to each of the one or more API providers; determining, by the one or more processors (202), an API provider condition of each of the one or more APIs by analysing the retrieved information; updating, by the one or more processors (202), the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers; receiving, by the one or more processors (202), the one or more API requests via at least one User Equipment (UE) (102); parsing, by the one or more processors (202), utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests; selecting, by the one or more processors (202), utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file; and transmitting, by the one or more processors (202), the one or more API requests to the selected API provider.
2. The method as claimed in claim 1, wherein the information comprises response time of each of the one or more API providers, latency, availability of each of the one or more API providers, and reliability of each of the one or more API providers.
3. The method as claimed in claim 1, wherein the API provider condition corresponds to at least one of a performance efficiency, a response turnaround time, a present user load of each of the one or more API providers.
4. The method as claimed in claim 1, wherein on selection of the API provider, the method comprises the step of updating, by the one or more processors (202), details pertaining to the selected API provider in a log file, wherein the log file is utilized for training the AI/ML model.
5. The method as claimed in claim 1, wherein on parsing, the method comprises the steps of: comparing, by the one or more processors (202), the API provider condition corresponding to each of the API provider; and selecting, by the one or more processors (202), the API provider with at least one of a required performance efficiency, required response turnaround time, and a required user load based on the comparison.
6. A system (108) for managing one or more Application Programming Interface (API) requests in a network (106), the system (108) comprising: a retrieving unit (216) configured to retrieve, information pertaining to one or more API calls and performance of one or more API providers from an API call log corresponding to each of the one or more API providers; a determining unit (218) configured to determine, an API provider condition of each of the one or more APIs by analysing the retrieved information; an updating unit (220) configured to update, the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers; a receiving unit (224) configured to receive, the one or more API requests via at least one User Equipment (UE) (102);
a parsing unit (226) configured to parse utilizing an Artificial Intelligence/Machine Learning (AI/ML) model, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests; a selecting unit (228) configured to select utilizing the AI/ML model, the API provider of the one or more API providers based on the updated API provider condition provided in the configuration file; and a transmitting unit (230) configured to transmit the one or more API requests to the selected API provider.
7. The system (108) as claimed in claim 6, wherein the information comprises response time of each of the one or more API providers, latency, availability of each of the one or more API providers, and reliability of each of the one or more API providers.
8. The system (108) as claimed in claim 6, wherein the API provider condition corresponds to at least one of a performance efficiency, a response turnaround time, a present user load of each of the one or more API providers.
9. The system (108) as claimed in claim 6, wherein the updating unit (220) is configured to update, details pertaining to the selected API provider in a log file, wherein the log file is utilized for training the AI/ML unit, upon selection of the API provider.
10. The system (108) as claimed in claim 6, wherein the selecting unit (228) is configured to: compare, the API provider condition corresponding to each of the API provider; and
select, the API provider with at least one of a required performance efficiency, required response turnaround time, and a required user load based on the comparison.
11. A non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor (202), cause the processor (202) to: retrieve, information pertaining to one or more API calls and one or more API providers' performance from an API call log corresponding to each of the one or more API providers; determine, an API provider condition of each of the one or more API providers by analysing the retrieved information; update, the API provider condition of each of the one or more APIs in a configuration file of each of the one or more API providers; receive, the one or more API requests via at least one User Equipment (UE); parse, the configuration file corresponding to each of the one or more API providers upon receipt of the one or more API requests; select, an API provider of the one or more API providers based on the updated API provider condition provided in the configuration file; and transmit, the one or more API requests to the selected API provider.
12. A User Equipment (UE) (102), comprising: one or more primary processors (305) communicatively coupled to one or more processors (202) of a system (108), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which, when executed by the one or more primary processors (305), cause the UE (102) to: transmit, one or more Application Programming Interface (API) requests to the one or more processors (202);
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.