
WO2026006311A1 - Medical image management with channel selection and load balancing - Google Patents

Medical image management with channel selection and load balancing

Info

Publication number
WO2026006311A1
WO2026006311A1 (PCT/US2025/035045)
Authority
WO
WIPO (PCT)
Prior art keywords
image
viewer
request
server
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/035045
Other languages
French (fr)
Inventor
Alaguraj SUNDARARAJ
Josip CERMIN
Gayathri DEVARAJ
Gayathry ASHOKKUMAR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Live Medica Ip LLC
Original Assignee
Live Medica Ip LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Live Medica Ip LLC filed Critical Live Medica Ip LLC
Publication of WO2026006311A1 publication Critical patent/WO2026006311A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5683 Storage of data provided by user terminals, i.e. reverse caching

Definitions

  • es6-iServer memory management [0092] Memory management within an es6-iServer package ensures optimal utilization of the available physical memory in the exemplary medical image management system. The process of memory management occurs in various stages throughout the request processing lifecycle, enabling efficient operation while maintaining a balance between memory reservation and usage. [0093] The memory management of es6-iServer reserves memory for its processing at the first level. When a new request is received, the required memory is compared with the reserved system memory, and the request may proceed only when the memory limit is satisfied. A request outside the threshold limit may be returned with a memory-shortage prompt, and the client may try to access the exemplary system again after some time.
  • the memory management process ensures that the exemplary medical image management system optimally utilizes its physical memory. Memory resources are well allocated for various processing tasks, which helps to maintain system performance and prevent memory-related issues.
  • Memory management process starts with the es6-iServer package being configured with a memory threshold. The memory threshold is a percentage of the total physical memory available in the exemplary system, which is stored in the DiRC database associated with the server.
  • the first stage of memory management involves periodic checks initiated by an internal process within the es6-iServer. The periodic check runs at specific intervals and assesses the current availability of physical memory within the system. [0097] During this stage, the exemplary system involves interactions with the image receiver service.
  • Fig.8A shows a process of calculating and registering the filesize information of a medical image in the DiRC database.
  • Fig.8B shows a process of retrieving the filesize information of a medical image from the DiRC database and returning the filesize information to the web viewer.
  • Fig.8C shows a process of retrieving the required memory (filesize) from the viewer request’s parameters and retrieving the memory threshold limit from the DiRC database.
  • the second stage of the memory management process starts when the es6-iServer calculates a total consumed memory, which includes the memory required by the viewer request and the memory reserved by the es6-iServer channels for other wado processes.
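The two-stage admission check described above can be condensed into a short sketch. This is a minimal illustration only: the class name MemoryGuard, its methods, and the example figures are assumptions, not the actual es6-iServer interface.

```python
class MemoryGuard:
    def __init__(self, total_physical_mb, threshold_pct):
        # The threshold is a percentage of total physical memory
        # (stored in the DiRC database in the real system).
        self.limit_mb = total_physical_mb * threshold_pct / 100.0
        self.reserved_mb = 0.0  # memory reserved for active wado processes

    def admit(self, required_mb):
        """Required memory plus already-reserved memory must stay under
        the threshold; otherwise the request is rejected and the client
        is told to retry after some time."""
        if self.reserved_mb + required_mb > self.limit_mb:
            return False  # caller returns a memory-shortage prompt
        self.reserved_mb += required_mb
        return True

    def release(self, reserved_mb):
        # Called when a wado process finishes, freeing its reservation.
        self.reserved_mb = max(0.0, self.reserved_mb - reserved_mb)


guard = MemoryGuard(total_physical_mb=16000, threshold_pct=75)  # 12000 MB limit
print(guard.admit(8000))   # True  - fits under the 12000 MB limit
print(guard.admit(5000))   # False - 8000 + 5000 exceeds the limit
guard.release(8000)
print(guard.admit(5000))   # True  - the earlier reservation was released
```

The same check runs at both stages: DiRC gates incoming viewer requests, and the es6-iServer channel repeats the check before creating a wado process.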
  • Edge computing architecture [0102]
  • the exemplary medical image management system may have an edge computing architecture, Enterprise-Flux, to improve its performance.
  • the Enterprise-Flux architecture comprises a “Flux-Central” server and several “Flux-Pod” servers, which are distributed clusters of sub-servers pertaining to different sites/locations. Flux-Central may deliver end-to-end workflow operations by default.
  • Fig.9 shows an embodiment of the exemplary medical image management system configured with edge computing architecture.
  • the exemplary edge computing architecture includes the es6-iServer, which may be installed in "Flux-Central" or "Flux-Pods" to distribute a large volume of concurrent requests and deliver high-quality images on time.
  • es6-iServer may be deployed in multiple Flux-Pods, which may scale the exemplary system up horizontally.
  • the files required for rendering will be available in the es6-iServer's local pods, so the exemplary system may deliver medical images at high speed by exploiting proximity of data and better bandwidth availability.
  • Operation of es6-iServer's edge computing architecture.
  • an image receiver storage SCP channel
  • the image is stored in the file store in decompressed image format either in local drive or in a network drive.
  • the image’s related metadata information pertaining to an organization is consolidated and synchronized to the Flux-Central using a pod-central synchronization service.
  • the es6-iServer registers the metadata information of the consolidated image after a two-stage consistency validation and registers the image in the database. The images are then available in the application (i.e., web viewer) for access by users.
  • the application i.e., web viewer
  • the application server runs in Flux-Central.
  • the users from the pod environment access the application, and the application data from the database is accessed from Flux-Central. As the application data is relatively small, transferring it to the users will not incur much latency.
  • the web viewer may view the medical images rendered by the es6-iServer from the server-side. As the medical images are available in the pod environment’s local storage, the es6-iServer also runs in the pod environment.
  • the web application includes a pod discovery client, which keeps track of the image availability in the pod/Central.
  • the pod discovery client shares the target pod server information to the web viewer.
  • the web viewer triggers an image preparation request for a study/series directly to the pod environment server.
  • the image request is received by the DiRC service running in the pod environment.
  • the DiRC checks for the available es6-iServer channel and forwards the request.
  • FIG.10 shows an embodiment of a flux-pod and a flux-central in the edge-computing architecture.
  • Session management [0112] An interaction between users and the exemplary medical image management system is known as a session.
  • Session management involves the maintenance and control of user sessions.
  • Session management includes several functions:
    1. Authentication: the authentication function confirms the request's identity. After authentication, a request is given access to particular system resources and features.
    2. Session creation: when a user starts a session, a session identifier (ID) is created and assigned to the session. The session ID may be a session token or cookie. On the server, sessions are distinguished from one another using the session identifier.
    3. Session tracking: the server may keep track of each request's session, tying it to the request's identity and saving session information.
  • Session management process may include two stages.
  • the first stage of the session management process starts when the web viewer sends a study request to the image web server (DiRC) via the reverse proxy server.
  • This request generally contains a session ID that was created when the user logged in to the application.
  • DiRC identifies an image manager to forward the request along with the session ID.
  • the image manager then calls an API in Flux Central to authenticate the session ID. If the authentication is successful, the image manager creates a new wado process for the session ID. If the authentication fails, the image manager returns an "invalid session" error to the web viewer, so the user must log in again.
  • Fig.11A shows the first stage of the session management process.
  • the wado process holds the session ID in its internal memory, processes the request, and responds to the web viewer via the image web server (i.e., DiRC).
  • the DiRC then stores the session information as transaction_upstreams. Any further image processing requests from the web viewer with the same session ID will first be validated by the DiRC. If the DiRC validates the session and identifies the wado process for it, it routes the processing requests directly to that wado process, skipping the image manager. The wado process handles the requests and responds to the web viewer via the DiRC.
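The routing described above (image-manager authentication on the first request, then direct DiRC-to-wado routing via the stored transaction_upstreams mapping) might look roughly like this sketch; the method names, return strings, and wado identifiers are illustrative assumptions.

```python
class DiRC:
    def __init__(self, image_manager):
        self.image_manager = image_manager   # authenticates & creates wado
        self.transaction_upstreams = {}      # session_id -> wado process id

    def route(self, session_id, request):
        # Fast path: a wado process already holds this session, so the
        # request bypasses the image manager entirely.
        if session_id in self.transaction_upstreams:
            wado = self.transaction_upstreams[session_id]
            return f"wado:{wado} handled {request}"
        # Slow path: the image manager authenticates the session ID and
        # creates a new wado process for it (None = authentication failed).
        wado = self.image_manager(session_id)
        if wado is None:
            return "error: invalid session"  # viewer must re-log in
        self.transaction_upstreams[session_id] = wado
        return f"wado:{wado} handled {request}"


# Toy image manager: only session "s1" authenticates successfully.
dirc = DiRC(image_manager=lambda sid: "p42" if sid == "s1" else None)
print(dirc.route("s1", "GET /study/1"))   # slow path, creates wado p42
print(dirc.route("s1", "GET /image/7"))   # fast path, skips image manager
print(dirc.route("bad", "GET /study/2"))  # authentication failure
```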
  • OAuth 2.0 authentication [0118]
  • the es6-iServer may be implemented with OAuth 2.0 authentication to secure its communication with the application server via an authenticated framework.
  • OAuth is an open standard used for granting access to functionality, data, etc. without having to deal with the original authentication.
  • OAuth allows a user to grant access to a client application to the user's protected resources, without revealing the user’s credentials. OAuth does this by granting the requesting client application a token, after user approves access. Each token grants limited access to specified resources for a specific period.
  • OAuth 2.0 authorization Package registration in OAuth server: When a customer registers a new package, the system may check the customer for their client ID and secret key. When the client ID and secret key exist, the system will return the existing client ID and secret key. When the client ID and secret key do not exist, the system will create a new client ID and secret key for the customer.
  • Client ID and secret key storage The obtained client ID and secret key are stored in a master database. The storage is associated with the package issued to a particular customer.
  • Time-to-live (TTL): the TTL for each package determines the validity period of an OAuth token.
  • Token generation process When initiating a channel, the system will obtain the channel ID and relevant parent package information from the channel configuration. The parent package information will contain the associated client ID and secret key. Using the channel ID, client ID, and secret key, the system will invoke the token generation API to acquire an OAuth token.
  • Token management When an OAuth token does not exist for the requested channel, the system will create a new token. The generated token will be stored in the master database and also in a Redis cache. The system will respond to the API request with the generated token. When an OAuth token already exists, the system will check its validity. When the token is expired, the system will generate a new token and respond. When the token is still valid, the system will return the existing token.
  • Fig.12A shows the client registration process of the OAuth 2.0 Authorization.
  • OAuth authentication Fig.12B shows the OAuth authentication feature in the OAuth server.
  • Dedicated Disk IO Handler: in this setup, the es6-iServer runs on a server configured with a dedicated disk where the DICOM images are stored and retrieved. Hence the disk IO handler's control on read permission is applied for the resource management pertaining to the particular server machine where the es6-iServer is running.
  • the disk IO handler manages the read requests received from es6-iServer instances running on more than one server machine, and the DICOM filestore is shared between multiple es6-iServers.
  • the disk IO handler maintains a queue for each server, and control on read permission is applied for the resource management pertaining to the particular server machine where the es6-iServer is running.
  • How does the File IO Handler work? [0126] In the es6-iServer architecture, incoming requests are managed through its child wado processes. These wado processes, in turn, utilize concurrent threads for file read requests. The pivotal component responsible for efficiently handling these file read requests is the File IO handler.
  • the Disk I/O Controller is composed of several critical components to manage and optimize disk access across multiple servers: [0127] Server-Specific Request Queues: The Disk I/O Controller maintains a server-specific request queue for each ES6 server that is accessing the shared disk. Each queue holds incoming disk read requests from the child processes running on that specific ES6 server. This ensures that requests are handled independently for each server. [0128] Server-Based Thread Pool: The Disk I/O Controller uses a server-based thread pool, where each server has a configured number of threads for handling disk read operations. The number of threads available for concurrent disk reads depends on the server's available CPU cores and the configured threshold, ensuring that disk I/O operations are managed based on resource availability.
  • Threshold Limiting The Disk I/O Controller checks the number of active disk reads for each server. If a server exceeds its configured limit for concurrent disk reads (threshold), the new read requests from that server are queued and placed in the server’s specific queue until a thread becomes available.
  • TCP Socket Communication Child processes from multiple servers communicate with the Disk I/O Controller via TCP sockets. When a child process needs to read a file, it sends a request to the Disk I/O Controller over the socket. If the request is permitted, the child process begins reading the file directly from the disk. After the file is read successfully, the child process sends a confirmation message back to the Disk I/O Controller.
  • the request is added to the server-specific request queue, and the child process is instructed to wait.
  • Permission Granting If the server's active disk read count is below the threshold, the Disk I/O Controller grants permission to the requesting child process to read from the disk. This is communicated back to the child process over the TCP socket, allowing it to proceed with the read.
  • Direct Disk Read After receiving permission, the child process directly accesses the shared disk and retrieves the requested DICOM file. The file is then loaded into memory for processing by the child process itself.
  • Completion Notification Once the child process has successfully loaded the DICOM file into memory, it sends a read completion notification back to the Disk I/O Controller via the same TCP socket.
  • Thread Pool Management After receiving the completion message, the Disk I/O Controller decreases the count of active threads for the requesting server. This allows the next request in the server-specific queue to proceed, granting the corresponding child process permission to read the next file.
  • Queue Management The Disk I/O Controller processes requests in the order they arrive in the server-specific request queue, maintaining fairness. Once the active thread count drops below the server’s threshold, the next request in the queue is granted permission to proceed.
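The threshold limiting, permission granting, completion notification, and queue management steps above can be condensed into a small sketch. The real controller talks to child processes over TCP sockets; here that protocol is reduced to method calls, and all names are illustrative assumptions.

```python
from collections import deque


class DiskIOController:
    def __init__(self, thresholds):
        self.thresholds = thresholds                    # server -> max concurrent reads
        self.active = {s: 0 for s in thresholds}        # active read count per server
        self.queues = {s: deque() for s in thresholds}  # FIFO queue per server

    def request_read(self, server, request_id):
        """Grant permission if the server is under its threshold;
        otherwise queue the request and tell the child process to wait."""
        if self.active[server] < self.thresholds[server]:
            self.active[server] += 1
            return "granted"
        self.queues[server].append(request_id)
        return "queued"

    def read_complete(self, server):
        """Completion notification: free a slot and, if a request is
        waiting, grant it in arrival order (first come, first served)."""
        self.active[server] -= 1
        if self.queues[server]:
            nxt = self.queues[server].popleft()
            self.active[server] += 1
            return nxt  # this queued request may now proceed
        return None


ctl = DiskIOController(thresholds={"es6-A": 2})
print(ctl.request_read("es6-A", "r1"))  # granted
print(ctl.request_read("es6-A", "r2"))  # granted
print(ctl.request_read("es6-A", "r3"))  # queued (threshold reached)
print(ctl.read_complete("es6-A"))       # r3 now proceeds
```

Because each server has its own queue and counter, one overloaded es6-iServer cannot starve the others, matching the server-specific resource management described above.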
  • Concurrency Control: Server-Specific Resource Management
  • the Disk I/O Controller effectively manages concurrency for disk reads through the following mechanisms: [0142] Server-Specific Request Queues: The controller handles disk read requests for each server separately. Each server has its own request queue, ensuring that requests are processed based on the resource utilization and availability specific to that server. This separation prevents the overload of a single server from affecting the others. [0143] Dynamic Thread Pool Management: The thread pool is configured based on the CPU cores of the server running the Disk I/O Controller. Each server can have a different number of concurrent disk read threads, allowing for flexibility in managing system resources.
  • Threshold-Based Queueing The Disk I/O Controller checks the number of active disk reads per server. If the number of active reads exceeds the threshold for a particular server, new requests from that server are queued. The system grants permission for disk reads on a first-come, first-served basis, ensuring fairness in handling requests for each server.
  • Key features of file IO handler [0146] The wado process created under any es6-iServer channel will be performing the image computing. Hence it has to read and load the image from the filestore into the physical memory.
  • R1: read-process thread limit in the wado process.
  • the wado process's read function will maintain the list of files that have to be read.
  • the wado process will make concurrent requests with an upper limit of R1.
  • R1 is the concurrent request limit for each wado process running in one or multiple es6-iServer channels.
  • the R1 value should be configured relative to the number of logical processors for a better performance outcome.
  • R1: thread limit of the thread pool in the File IO Handler.
  • the File IO handler thread pool takes the processing (read) thread limit configuration as R1.
  • [0148] The thread limit has to be configured according to the number of logical processors of the es6-iServer server, though to be safe it is best to use the number of physical cores. Considering the thread limit by CPU cores alone is not an efficient approach; the thread limit has to be derived from the HDD/SSD read capacity versus the file size to be read by each thread.
  • Thread Limit = min(CPU cores, Disk I/O threads)
  • the es6-iServer channels can be configured based on image types such as CT, MR, MG, etc. As the standard file size for the CT and MR modalities is 512 KB, for such modalities the CPU core count can be used as the thread limit on both HDD and SSD.
  • image types such as US, RF, and BTO are multiframe images, and the files are stored frame by frame. Each frame may be smaller than 3 MB, so in such cases the thread limit can also be based on the number of CPU cores.
  • the size of the image may range from 10 MB to 150 MB.
  • for such large images, the thread limit has to be based on the disk I/O threads rather than the CPU cores.
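The modality-based rule above, combined with the formula Thread Limit = min(CPU cores, disk I/O threads), might be applied as in this sketch. The function name, parameters, and modality grouping are assumptions drawn from the surrounding text, not a specification.

```python
# Small-file modalities per the text: CT/MR (~512 KB per file) and
# multiframe US/RF/BTO (<3 MB per frame). Anything else is treated as a
# large single-file modality (10 MB to 150 MB per image).
SMALL_FILE_MODALITIES = {"CT", "MR", "US", "RF", "BTO"}


def thread_limit(modality, cpu_cores, disk_io_threads):
    if modality in SMALL_FILE_MODALITIES:
        # Small reads: CPU core count alone is an adequate bound.
        return cpu_cores
    # Large files: the disk, not the CPU, is the bottleneck, so apply
    # Thread Limit = min(CPU cores, disk I/O threads).
    return min(cpu_cores, disk_io_threads)


print(thread_limit("CT", cpu_cores=16, disk_io_threads=4))     # 16
print(thread_limit("MAMMO", cpu_cores=16, disk_io_threads=4))  # 4
```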
  • the memory management of es6-iServer introduces the idea of reserving memory for its processing at the first level.
  • the required memory is checked against the cumulative reserved memory, and the request is permitted to proceed only when the memory threshold limit is satisfied.
  • requests are served on a first-come, first-served basis; a request received outside the threshold limit will be returned with a memory-shortage prompt and may try to access the system again after some time.
  • the memory management strategy starts with the es6-iServer package being configured with a predetermined memory threshold. This threshold is a percentage of the total physical memory available in the system. This configuration is stored in the DiRC database associated with the server.
  • the first stage of memory management involves periodic checks initiated by an internal process within the es6-iServer.
  • the first stage of memory management process also involves interactions with the image receiver service. This service records the file sizes of medical images during their registration, storing this information in the database. When a request is received by DiRC, it evaluates whether the sum of the required file size and the total used physical memory is below the predefined threshold limit. If this condition is not met, an error is returned to the web viewer.
  • the request is forwarded to the es6-iServer for further processing.
  • the second stage of memory control occurs during the study request assignment to the available wado processes by the es6-iServer channels for studies, series, or other elements.
  • the es6-iServer examines the cumulative required memory for the wado process along with the memory reserved by the es6-iServer channels for other wado processes. It checks whether this sum remains below the predefined memory threshold limit. Only when this condition is met does the es6-iServer proceed to create the wado process, updating the DiRC database with the newly reserved memory. Importantly, the reserved memory is efficiently managed throughout the system's lifecycle.
  • the Flux-Cloud can deliver end-to-end workflow operations by default. To achieve horizontal scalability, edge computing, and higher performance, 'Flux-Pod' server deployments can be engaged based on need.
  • This enterprise solution includes the es6-iServer, which is intended to be installed in "Flux-Cloud" or in multiple "Flux-Pods"; edge computing thus allows distribution of a large volume of concurrent requests and timely delivery of high-quality images. Based on need, the es6-iServer may be deployed in multiple Flux-Pods, making the infrastructure capable of scaling up horizontally. The files required for rendering will be available in the local pods, which enhances delivery speed through proximity of data and better bandwidth availability.
  • the image receiver (Storage SCP channel) running in the flux-pod server, receives the medical image from the medical imager.
  • the images are initially stored in a local edge file system, often in a decompressed or cache-optimized format. This supports image rendering from local storage and enhances readiness for the image viewing workflow.
  • metadata is immediately extracted and registered in the local flux database.
  • Patient Demographics e.g., Name, DOB, Gender, Account Number
  • Study Information e.g., Study Instance UID, Study Date, Modality
  • Series and SOP Instance Details: this local handling allows for fast reception and streaming image rendering by the image service (es6-iServer) from its local (proximity) server, which gives the advantage of a faster response to the viewer.
  • the patient consistency check is performed as a first stage of cleansing the received data at the source; upon identification of the correct patient, the study consolidation process is performed at the edge. As a result of the process, the study object structures are prepared and the information is migrated to the enterprise through an API, so the study is registered in the enterprise database.
  • the application server is run in the enterprise.
  • the radiologist in the POD environment accesses the application; the application data is accessed from the enterprise and its database.
  • as the application data is relatively small, transferring it will not incur much latency while serving the radiologist or other user.
  • the web viewer is used by the radiologist to view the medical images, which are rendered by the es6-iServer on the server side.
  • the es6-iServer also runs in the POD environment.
  • the web application includes a POD discovery client, which keeps track of the image availability in the POD/Cloud.
  • the POD discovery client shares the target POD server information with the web viewer.
  • request to the application server.
  • the Web Viewer triggers the image study preparation request for a study/series directly to the POD environment server.
  • the image study preparation request is received by the DiRC service running in the POD environment.
  • the DiRC then checks for an available es6-iServer channel and forwards the request.
  • the es6-iServer channel assigns the study request to an available wado process, and the image rendering is thereby performed from the POD.
  • the wado process reads and loads the image file, which has been stored in its local environment, under the control of the File IO handler.
  • Flux Discovery Client [0183] What is the flux discovery client? [0184] Radiologists or clinicians interact with the PACS viewer from the enterprise web application. The viewer requires a server to render the images of a study, and identifying the best available server to do the task is key. The task of the "Flux Discovery Client" involves: [0185] Providing data on the Fluxes which have been assigned to the user.
  • Session management involves key functions: [0194] Authentication: It confirms the request's identity. Following authentication, the request is given access to particular system resources and features. [0195] Session Creation: When a user starts a session, a special session identifier is created and assigned to it. This identification is often a session token or cookie. On the server, several sessions are distinguished from one another using this identification.
  • Session Tracking The server must keep track of each request's session, tying it to the request's identity, and saving pertinent session information. Any information that must be available during the session can be added to this data. [0197] How this works in Image Manager. [0198] Session Creation: The login process begins when a user accesses the application's login page and enters their username and password. These credentials are sent to the Application Server through a Reverse Proxy server, which acts as a secure middleman. [0199] The Web Application Server receives the credentials and checks them against the data stored in the Application Database. [0200] If the credentials are invalid, the server returns an "Authentication Failure" message to the user.
  • If the credentials are valid, the server generates a unique session ID to track the user's session securely. [0202] This session ID is stored in the Session Database, linked to the user's account. Then, the server fetches the user's data from the database. [0203] The retrieved data is passed through the Reverse Proxy server and sent to the client-side Application page. The Application page displays the data, showing the user their tasks or relevant information. The user's browser stores this data, allowing them to interact with and use the application. With the session ID in place, the user is now logged in and can continue using the application's features securely.
  • Session initialization in imaging workflow After the user logs into the web application, the FluxDiscoveryClient, running as a worker in the browser, retrieves the list of available Flux servers and DIRC URLs. It then sends a ping request to each DIRC, including the session ID, to verify connectivity and session status. Upon receiving the ping request, the DIRC checks Redis for the session ID: if the session is new, it stores it in Redis with an expiry time; if it already exists, the expiry is refreshed to maintain session validity. The worker continues sending ping requests every five minutes to ensure the session remains active.
  • Session authentication in image workflow When a user initiates an image viewing or study request through the Viewer, the request is sent to the Distributed Image Request Controller (DIRC) along with the session ID for authentication. Upon receiving the request, the DIRC queries the Redis database to verify whether the provided session ID exists and is valid. If the session ID is found in Redis, indicating an active and authenticated session, the DIRC proceeds to route the request based on its type: study related requests are forwarded to the es6-iserver, which handles study data retrieval, while image related requests are directed to WADO or the child processes responsible for image rendering and transformations. This ensures that only authenticated users can access study and imaging workflows.
  • DIRC: Distributed Image Request Controller
  • Since these pings serve as a mechanism to refresh session validity, the absence of a ping indicates that the user is no longer actively engaged with the system. [0208] Once the DIRC detects that a session ID has not been updated within the predefined expiry interval, it allows Redis to automatically remove the session ID from its database. This automatic cleanup ensures that inactive or abandoned sessions do not persist indefinitely, maintaining efficient resource utilization and enhancing system security by preventing unauthorized reuse of expired sessions. [0209] After the session is removed from Redis, any future study or image-related requests originating from the Viewer will fail authentication, as the DIRC will no longer recognize the session ID. When an unauthenticated request is received, the DIRC immediately rejects it, returning an authentication failure response to the Viewer.
  • OAuth 2.0 authentication [0211] What is OAuth 2.0 authentication? [0212] The es6-iServer communicates with the web server to request application data stored in the database. Session authentication cannot be used because the service is common to all users, and implementing session authentication would be very complex. Leaving the communication without authentication, however, would pose a risk. Hence the es6-iServer has been implemented with OAuth 2.0 authentication to secure its communication with the web application server via an authenticated framework.
  • OAuth is an open standard used for authorization; i.e. to grant access to functionality/data/etc.
  • Client ID and Secret key Storage [0221] - The obtained client ID and secret key are stored in a master database. [0222] - The storage is associated with the package issued to a particular customer. [0223] Time-to-Live (TTL) for Individual Packages [0224] - The TTL for each individual package is hard-coded within the OAuth server. [0225] - This TTL determines the validity period of an OAuth token. [0226] Token Generation Process [0227] - When initiating a channel, the system will obtain the channel ID and relevant parent package information from the channel configuration. [0229] - The parent package information will contain the associated client ID and secret key.
  • the system will invoke the token generation API to acquire an OAuth token.
  • Token Management [0233] - If an OAuth token does not exist for the requested channel, the system will create a new token.
  • the generated token will be stored in the master database and also in a Redis cache.
  • the system will respond to the API request with the generated token.
  • [0236] If an OAuth token already exists, the system will check its validity.
  • [0237] If the token is expired, the system will generate a new token and respond.
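The token management branches above (create when missing, regenerate when expired, reuse when valid) can be sketched as follows. The cache object, function names, and TTL handling are illustrative assumptions standing in for the master database and Redis cache described in the text.

```javascript
// Sketch of channel token management (hypothetical names).
// tokenCache stands in for the master database / Redis cache pair;
// generate stands in for the token generation API call.
function getOrCreateToken(channelId, tokenCache, generate, ttlMs, now) {
  const entry = tokenCache.get(channelId);
  if (entry && now < entry.expiresAt) {
    return entry.token;                // still valid: return existing token
  }
  const token = generate(channelId);   // missing or expired: generate anew
  tokenCache.set(channelId, { token, expiresAt: now + ttlMs });
  return token;
}

let calls = 0;
const cache = new Map();
const generate = (id) => `token-${id}-${++calls}`; // stub generator
const TTL = 3600000; // 1 h TTL, illustrative of the hard-coded package TTL

const t1 = getOrCreateToken("ch1", cache, generate, TTL, 0);       // new token
const t2 = getOrCreateToken("ch1", cache, generate, TTL, 1000);    // cached
const t3 = getOrCreateToken("ch1", cache, generate, TTL, 4000000); // expired, regenerated
```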
  • If the token is still valid, the system will return the existing token.
  • Disk IO Handler
  • TCP: The inter-process communication between the Disk IO Handler and the wado process is TCP based. Hence, the service is implemented using a TCP listener.
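TCP delivers a byte stream with no message boundaries, so the IPC above needs some framing convention. The length-prefixed scheme below is a common approach offered only as an assumption; the actual wire format between the Disk IO Handler and the wado process is not specified in the text.

```javascript
// Length-prefixed framing sketch for TCP-based IPC (assumed format,
// not the patented protocol): each message is a 4-byte big-endian
// length followed by a UTF-8 payload.
function encodeFrame(payload) {
  const body = Buffer.from(payload, "utf8");
  const frame = Buffer.alloc(4 + body.length);
  frame.writeUInt32BE(body.length, 0); // length prefix
  body.copy(frame, 4);
  return frame;
}

// Decodes as many complete frames as the buffer holds; returns the
// messages plus any trailing partial bytes to keep for the next read.
function decodeFrames(buffer) {
  const messages = [];
  let offset = 0;
  while (buffer.length - offset >= 4) {
    const len = buffer.readUInt32BE(offset);
    if (buffer.length - offset - 4 < len) break; // incomplete frame
    messages.push(buffer.toString("utf8", offset + 4, offset + 4 + len));
    offset += 4 + len;
  }
  return { messages, rest: buffer.subarray(offset) };
}

const stream = Buffer.concat([encodeFrame("READ /img/1.dcm"), encodeFrame("OK")]);
const { messages, rest } = decodeFrames(stream);
```

In a real TCP listener (Node's `net.createServer`), `decodeFrames` would be called on each `data` event, carrying `rest` forward between reads.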
  • The disk IO handler supports different types of deployments. Dedicated Disk IO Handler - In this setup, the es6-iServer runs on a server configured with a dedicated disk where the DICOM images are stored and retrieved. Hence, the disk IO handler's control over read permission is applied for resource management pertaining to the particular server machine where the es6-iServer is running.
  • the architecture introduces an intelligent routing mechanism to offload trivial tasks such as image normalization, window level adjustments and enhancement to a GPU-based processing stream, particularly for large-scale modalities such as mammography, while reserving CPU execution for low-latency, time-sensitive rendering tasks.
  • a centralized memory orchestration layer coordinates zero-copy data exchange between compute agents, enabling high-throughput processing of diagnostic imaging data. The system guarantees operational continuity when a GPU is not available on the hardware; on the other hand, it supports set-based asynchronous parallelism for optimal scalability across large study volumes.
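The zero-copy exchange idea can be illustrated with typed-array views over a single shared allocation: compute stages pass offsets into the pool rather than copying pixel data. All names here are illustrative assumptions; a production CPU-GPU system would use pinned or unified memory, which this sketch only approximates.

```javascript
// Sketch of a centralized buffer pool with zero-copy handoff between
// stages (hypothetical names). Stages exchange views, never copies.
const POOL_BYTES = 1024 * 1024;
const pool = new ArrayBuffer(POOL_BYTES); // central buffer pool
let nextOffset = 0;

// "Allocates" an image slot by handing out a view into the pool.
function allocView(byteLength) {
  if (nextOffset + byteLength > POOL_BYTES) throw new Error("pool exhausted");
  const view = new Uint8Array(pool, nextOffset, byteLength);
  nextOffset += byteLength;
  return view;
}

const producerView = allocView(256); // e.g. decompression stage writes here
producerView[0] = 42;                // write one sample pixel value

// A consumer (render stage) maps the same region: no copy occurred,
// both views share the underlying ArrayBuffer.
const consumerView = new Uint8Array(pool, producerView.byteOffset, 256);
```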
  • Medical imaging systems such as PACS and diagnostic platforms increasingly require rapid and scalable processing of high-resolution imaging data.
  • MG Mammography
  • US Ultrasound
  • MRI Magnetic Resonance Imaging
  • A CPU-only architecture struggles with pipelined processing and with sustaining performance under concurrent workloads. This leads to latency in image rendering, creates bottlenecks in viewing applications, and introduces inefficiencies in clinical workflows.
  • Existing image rendering frameworks are typically designed for homogeneous processing models. They either lack support for GPU acceleration or rely on rigid GPU dependencies, which causes unbalanced utilization of the available CPU-GPU hardware combination.
  • Traditional systems do not offer intelligent task partitioning based on workload, resulting in sub-optimal resource utilization and poor scalability.
  • Imaging Request Orchestrator [0254] Receives inbound study requests via DICOM protocols or PACS integrations. It extracts metadata such as modality type, study size, and priority class for task scheduling.
  • Adaptive Imaging Orchestrator AIO: [0256] Maintains stateful metadata about system resources, dispatch decisions, and task completion.
  • Intelligent Compute Dispatcher [0258] Determines routing logic between CPU and GPU engines based on real-time CPU load, image resolution class, and latency sensitivity. Tasks are scored and assigned accordingly.
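The text names the dispatcher's inputs (real-time CPU load, image resolution class, latency sensitivity) but not its scoring formula, so the weights and threshold below are purely illustrative assumptions.

```javascript
// Sketch of the Intelligent Compute Dispatcher's routing decision
// (hypothetical scoring; the patent does not disclose the formula).
function routeTask(task, cpuLoad, gpuAvailable) {
  if (!gpuAvailable) return "CPU";         // fallback path: no GPU present
  if (task.latencySensitive) return "CPU"; // low-latency previews stay on CPU
  // Larger images and a busier CPU both push work toward the GPU.
  const score = task.megapixels * 0.5 + cpuLoad * 10;
  return score > 20 ? "GPU" : "CPU";       // threshold is illustrative
}

const preview = { megapixels: 1, latencySensitive: true };   // initial CT/XR slice
const mammo = { megapixels: 50, latencySensitive: false };   // large MG image
```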
  • GPU Transformation Engine [0260] Executes computationally intensive image operations using parallel GPU processing. Supports batch transformation, multi-stream pipeline execution, and hardware-accelerated encoding (e.g., PNG conversion).
  • CPU Rendering Engine (CRE): [0262] Serves low-latency diagnostic previews and also acts as the fallback mechanism for non-GPU systems. Uses multi-threaded execution models to accelerate performance when GPU is unavailable or overcommitted.
  • Centralized Image Repository [0264] A shared, zero-copy accessible memory buffer pool that stores rendered and transformed images. Enables synchronization between compute agents and decouples processing from delivery layers. [0265] Asynchronous Batch Manager (ABM): [0266] Manages sets of image transformation tasks grouped by modality or study. Distributes workloads across asynchronous GPU streams to improve utilization and reduce queuing overhead. [0267] Routing and Execution Logic [0268] For high-priority preview slices (e.g., initial CT/XR images), the dispatcher routes to CPU pipelines to reduce load time.
  • the viewport images in the study are always sent to the CPU for initial rendering, regardless of system load or modality type. This ensures that the viewport images are immediately available for preview, even if the system is under heavy load or the GPU is occupied. The remaining slices in the study are sent to the GPU engine for batch processing. [0270] If GPU hardware is not detected or exceeds load thresholds, the fallback engine executes all tasks using CPU-based primitives, including the viewport images, with redundancy control to maintain workflow continuity. [0271] Zero-copy architecture via unified memory mapping for CPU-GPU co-access. [0272] Dynamic memory allocation scaled to study size, resolution, and processing stage.
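The routing rule above (viewport slices always to the CPU, remaining slices to the GPU, full CPU fallback when the GPU is absent or over its load threshold) can be sketched as a partition function. Function and field names are assumptions for illustration.

```javascript
// Sketch of the study-partitioning rule (hypothetical names).
function partitionStudy(slices, viewportIds, gpu) {
  const cpuQueue = [];
  const gpuQueue = [];
  const gpuUsable = gpu.present && gpu.load < gpu.loadThreshold;
  for (const slice of slices) {
    if (viewportIds.has(slice.id) || !gpuUsable) {
      cpuQueue.push(slice); // immediate preview, or fallback path
    } else {
      gpuQueue.push(slice); // batch transformation on the GPU engine
    }
  }
  return { cpuQueue, gpuQueue };
}

const slices = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }];
const withGpu = partitionStudy(slices, new Set([1]),
  { present: true, load: 0.3, loadThreshold: 0.8 });
const noGpu = partitionStudy(slices, new Set([1]),
  { present: false, load: 0, loadThreshold: 0.8 });
```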
  • CIR memory pages support in-place image transformation and format conversion.
  • Fault Tolerance [0275] All GPU tasks are redundantly registered with CPU fallback paths.
  • CIR acts as persistent buffer for interrupted image jobs.
  • Background validation engine verifies task integrity during and after processing.
  • Use Cases [0279] PACS servers process over 100 concurrent studies per hour, needing parallel transformation and compression.
  • Diagnostic viewer in mobile or resource-constrained environments defaulting to CPU-only rendering. [0281] It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.
  • Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, other exemplary embodiments include one particular value and/or the other particular value.
  • “comprising” or “containing” or “including,” is meant that at least the named compound, element, particle, or method step is present in the composition or article or method, but does not exclude the presence of other compounds, materials, particles, or method steps, even if the other such compounds, materials, particles, or method steps have the same function as what is named.
  • terminology will be resorted to for the sake of clarity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

A web viewer application is configured to send viewer requests and receive messages via a reverse proxy server configured to receive and route the viewer requests to an image web server based on the request's proxy configurations. An image web server is configured to distribute the viewer requests to image manager servers, and image manager servers are configured to receive the viewer requests and generate child processes to prepare and render images. A database is configured to store the images, store information of the viewer requests and the child processes, and store reserved memory information of the system. A file I/O handler is configured to manage the viewer requests to access the database using a waiting queue.

Description

MEDICAL IMAGE MANAGEMENT WITH CHANNEL SELECTION AND LOAD BALANCING Cross Reference to Related Applications [0001] This Application claims priority to and incorporates entirely by reference United States Patent Application Serial No.63/663,470 filed on June 24, 2024, and entitled Medical Image Management with Channel Selection and Load Balancing. Background [0002] The processing of the medical images by a Picture Archiving and Communication System (PACS) may use extensive functions and algorithms to maintain data security for data access, transfer, etc., and to handle images with different characteristics. The PACS viewing also requires speed and high quality to serve the patient quickly with the diagnosis report. [0003] Previous medical image management systems have attempted to reduce the security risks and costs in PACS viewing at high speed and quality. However, many of these systems operate with server-side rendering, and the previous image management systems may not address inherent problems that current systems have with request/response handling, concurrency control, study level processing, request distribution, disk I/O control, memory management, load balancing, specific rendering, prior studies rendering, and on-demand processing. [0004] There would be a benefit, therefore, to developing a system and method for a medical image management system that includes more efficient data processing and more options for meeting client specific demands in a secure environment.
Summary [0005] An exemplary system is disclosed having a web viewer application configured to send viewer requests and receive messages, a reverse proxy server configured to receive and route the viewer requests to an image web server based on the request’s proxy configurations, an image web server configured to distribute the viewer requests to image manager servers, image manager servers configured to receive the viewer requests and generate child processes to prepare and render images, a database configured to store the images, store information of the viewer requests and the child processes, and store reserved memory information of the system, and a file I/O handler configured to manage the viewer requests to access the database using a waiting queue. [0006] In some embodiments, the exemplary system includes an edge-computing architecture with flux-pod and flux-central to improve performance. [0007] In some embodiments, the exemplary system may have an image preparation process that receives images from the image manager servers (i.e., es6-iServer), stores the images in a filestore in a decompressed format, generates a child process to prepare a list of images in the decompressed format from the filestore, and loads the list of images into an intermediate memory. [0008] In some embodiments, the exemplary system may have an image rendering process that receives a requested image format from a viewer request, converts the images in the intermediate memory from the decompressed format to the requested format, and transfers the converted images to a web viewer for displaying.
[0009] In some embodiments, the exemplary system may have a memory management process that assesses an available memory of the system, receives image information in a viewer request for an image, receives a filesize of the image stored in a database, combines the filesize and a memory consumption of other processes to get a combined memory consumption, and compares the combined memory consumption with the available memory of the system. [0010] In some embodiments, the exemplary system may have an authentication process that sends a viewer request from a web viewer to an image web server, forwards the viewer request from the image web server to an image manager server using a session identifier (ID), and authenticates the session ID using an application programming interface (API). Brief Description of the Drawings [0011] FIG.1 shows an embodiment of a medical image management system configured with an image web server (i.e., Distributed Image Requests Controller - DiRC), an es6-iServer (i.e., image manager) having several channels, a file I/O handler, and PACS archive having several filestores. [0012] FIG.2 shows an example DiRC configured to distribute viewer requests to es6-iServer servers (i.e., image managers). [0013] FIG.3 shows an example es6-iServer channel configured to include a channel’s HTTP listener (i.e., channel level upstream, configuration upstream) and receive viewer requests. [0014] FIG.4 shows an example DiRC distributing a viewer request to a corresponding es6-iServer channel via the channel’s HTTP listener (i.e., configuration upstream). [0015] FIG.5 shows an example DiRC sending a server’s response (e.g., medical image) from an es6-iServer channel, via the wado process’s HTTP listener (i.e., transaction upstream), back to the web viewer.
[0016] FIG.6 shows an example operation of HTTP listeners in the exemplary medical image management when there is a wado process HTTP listener in the DiRC database that matches the viewer request’s parameters. [0017] FIG.7A shows an example file I/O handler managing various file read or write requests from wado processes using a waiting queue. [0018] FIG.7B shows various limit configurations of the wado processes and file I/O handler in the exemplary system. [0019] FIG.8A – 8C show the first stage of the memory management process of the exemplary system. Specifically, FIG.8A shows a process of calculating and registering the filesize information of a medical image in the DiRC database. [0020] FIG.8B shows a process of retrieving the filesize information of a medical image from the DiRC database and returning the filesize information to the web viewer. [0021] FIG.8C shows a process of retrieving the required memory (filesize) from the viewer request’s parameters and retrieving the memory threshold limit from the DiRC database. [0022] FIG.8D shows the second stage of the memory management process of the exemplary system. [0023] FIG.9 shows an embodiment of the medical image management system configured to have an edge-computing architecture (e.g., Enterprise-Flux). [0024] FIG.10 shows an embodiment of a flux-pod and a flux-central in the edge-computing architecture. [0025] FIG.11A – B shows the two stages of an example session management process in an embodiment of the medical image management system. [0026] FIG.12A – 12B shows an example OAuth authentication operation in an embodiment of the medical image management system. [0027] FIGS.13-31 are flow charts for the methods described below. Detailed Description [0028] Example es6-iServer: [0029] In a medical image management system, the patient’s medical Digital Imaging and Communications in Medicine (DICOM) images are obtained from a medical imager and stored in the PACS Archive.
With the es6-iServer, a web PACS may be used to view medical images. The es6-iServer is merely an abbreviation for any server that handles images according to this disclosure and may include numerous kinds of updated server modules that can be used to achieve the goals of this disclosure for medical images. In non-limiting examples, numerous es6 Javascript updates may be within the scope of this disclosure. [0030] An es6-iServer deploys and runs on multiple platforms (e.g., Linux, Windows). The es6-iServer supports Edge Computing, Distributed Image Computing, Virtual Private Rendering “Channels” (ViPRC), and it works with Distributed Image Request Channels (DiRC) and more. The es6-iServer may render DICOM images acquired from multiple imaging modalities with high quality and speed to support an imaging web viewer. [0031] The es6-iServer provides 100% server-side rendering of medical images and transmits the image data that is processed for display in the client browser (Web Viewer). In order to render the medical images, the Web Viewer running in the client web browser does not require any installation of additional software on the client PC. [0032] Some methods and systems supported by the es6-iServer include es6-iServer channels, DiRC, route handler, HTTP listener, single server image preparation and distributed server image preparation, diagnostic image rendering and non-diagnostic image rendering, memory management, file IO handler, edge computing, session management, OAuth 2.0 authentication, etc. [0033] es6-iServer channels. Es6-iServer channel is a method of software distribution that involves a subscription model for accessing the executable/service. The channel method may be used in both on-premises and cloud deployments under a proprietary subscription model. In the case of on-premises deployment, the subscribed package/executable is installed in the provided servers and the channel is configured based on customer requirement.
In the case of cloud deployment, the package/executable may be virtually shared by multiple customers by creating individual channels per their requirements and need. The es6-iServer channels are supported by both TCP and HTTP services. [0034] Each channel may subscribe to their individual set of configurations and validations, so customers may subscribe and pay based on their usage and demand. For vendors, the channels permit better control over the customer licensing, revocation, controlled access to their data, usage, etc. The ‘pay for use’ model is the main idea to bring the channel concept into PACS services. Additionally, a change made in configuration for particular customers’ channel doesn’t affect other customers. [0035] The channels for an image manager (i.e., es6-iServer) follow HTTP/HTTPS protocols for communication. In some embodiments, the channels are created and managed by customer, modality, bodypart, user, and timed channels for radiologist. [0036] Fig.1 shows the exemplary system configured with an es6-iServer having multiple channels to support request distribution from the DiRC. [0037] Features of es6-iServer channels. In some embodiments, the es6-iServer may run several channels on a single shared server for all or for a particular customer, modality, bodypart, radiologist, or chosen group. Therefore, one server may save resources and serve numerous customers simultaneously. [0038] In some embodiments, the es6-iServer may be configured as a customer-based dedicated server to provide customers with increased network performance and faster service. One or many channels are created and assigned to the dedicated server, and a channel only serves the requests to a particular customer. [0039] Image rendering on the server side is either memory, CPU, or IO intensive depending on the file size or number of images.
As a result, the es6-iServer channels running in servers should serve specific modality and bodyparts while considering the hardware specifications of the server. [0040] Example Distributed Image Requests Controller: [0041] In the exemplary medical image management system, a Distributed Image Requests Controller (DiRC) functions as an integrated load balancer for the image managers, which renders images in the exemplary medical image management system. The DiRC manages the es6-iserver request routing capabilities and handles fallback method, proxy server, and smooth request distribution. Additionally, it may respond to cross-origin requests. [0042] In the exemplary medical image management system, DiRC may distribute requests in various ways: 1. By a specific customer, for all modalities and body parts. 2. By a specific customer, for various specific modalities and body parts. 3. By multiple customers sharing, for all modalities and body parts. 4. By multiple customers sharing, for various specific modalities and body parts. 5. By a specific or a select group of radiologists. [0043] The DiRC also helps in Queue Management of request/response across all the es6-iServer channels configured in its integrated database. Fig.2 shows an example DiRC in an embodiment of the medical image management system where DiRC distributes various requests to the corresponding es6-iServer channels. [0044] Operations of DiRC. In the exemplary medical image management system, DiRC classifies a viewer request based on parameters and routes the request to the corresponding image manager to process the request. The classification parameters may include customer, modality, bodypart, single or distributed mode, user, dedicated channel and so on. The DiRC keeps different classification mappings relevant to distribution in its integrated database. The target of each classification is one of the es6-iServer channels that may be identified by the IP address and specific port.
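The classification step described above can be sketched as a lookup against stored mappings. The mapping fields, wildcard convention, and addresses below are assumptions for illustration; only the parameter names (customer, modality, and so on) come from the text.

```javascript
// Sketch of DiRC request classification (hypothetical mapping table).
// Each mapping targets one es6-iServer channel identified by IP:port.
const channelMappings = [
  { customer: "hospA", modality: "MG", target: "10.0.0.11:8081" }, // dedicated
  { customer: "hospA", modality: "*",  target: "10.0.0.12:8082" }, // customer-wide
  { customer: "*",     modality: "*",  target: "10.0.0.13:8083" }, // shared channel
];

// Returns the first (most specific) matching channel target, or null.
function classifyRequest(params) {
  for (const m of channelMappings) {
    const customerOk = m.customer === "*" || m.customer === params.customer;
    const modalityOk = m.modality === "*" || m.modality === params.modality;
    if (customerOk && modalityOk) return m.target;
  }
  return null;
}
```

Ordering the table from most to least specific lets a simple first-match scan implement the "dedicated channel before shared channel" preference.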
A channel level upstream (i.e., configuration upstream) is created when an es6-iServer channel is created and started. Fig.3 shows an example es6-iServer channel configured to include a HTTP listener (i.e., channel level upstream, configuration upstream) and receive viewer requests. [0045] A HTTP request will be initiated from the web viewer along with various classification parameters. According to the classification parameters in the request, the DiRC will reroute and distribute those viewer requests to corresponding es6-iServer channels. Fig.4 shows an example DiRC distributing a viewer request to a corresponding es6-iServer channel via the channel’s HTTP listener (e.g., configuration upstream). [0046] The image manager, in turn, will retrieve the relevant images from the image store and deliver them back to the web viewer. DiRC will track the distributed requests for subsequent re-routing as transaction upstreams. Fig.5 shows an example DiRC sending a server’s response (e.g., medical image) from an es6-iServer channel, via the wado (wireless access of dicom images) process’s HTTP listener (e.g., transaction upstream), back to the web viewer. [0047] Example HTTP listener [0048] In the exemplary medical image management system, the es6-iServer is a HTTP based web server that handles the HTTP requests from the web viewer. Hence, the es6-iServer channels include an HTTP listener program which starts listening to requests in a dedicated port once the channel is started. The HTTP listeners support both HTTP and HTTPS communications. [0049] HTTP listener is an inbound end point. An es6-iServer channel, after confirming the prerequisites of the viewer requests, creates a HTTP-based wado process to prepare and serve the images to the web viewer. After being created, the wado process also starts running as a separate HTTP listener process in a specific port.
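A minimal sketch of such a listener's route dispatch, in the Node.js style the es6-iServer name suggests: each listener binds its own dedicated port and maps request paths to handler functions. The routes, handler names, and response shape are all illustrative assumptions.

```javascript
// Sketch of route dispatch inside an HTTP listener (hypothetical
// routes). In a real listener this would sit behind
// http.createServer(...).listen(port) on the channel's dedicated port.
const routes = new Map([
  ["/wado/prepare", (params) => `preparing study ${params.studyId}`],
  ["/wado/render",  (params) => `rendering image ${params.imageId}`],
]);

function dispatch(path, params) {
  const handler = routes.get(path);
  if (!handler) return { status: 404, body: "no matching route definition" };
  return { status: 200, body: handler(params) };
}
```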
[0050] There are several types of HTTP listeners, including es6-iServer channel’s HTTP listener and wado process’s HTTP listener. When an es6-iServer channel is created, an HTTP listener is registered in DiRC database as configuration upstream. Fig. 4 further shows an example es6-iServer channel’s HTTP listener (i.e., configuration upstream) and its information that is registered in the DiRC database. [0051] Similarly, the transaction upstream, another example of HTTP listener, is also registered in DiRC database. The transaction upstream is a HTTP listener generated by the wado process under the es6-iServer channel. Fig.5 further shows an example wado process’s HTTP listener (i.e., transaction upstream) and its information that is registered in the DiRC database. [0052] Operations of HTTP listeners. When the web viewer makes an HTTP request for initial loading, the reverse proxy server receives and routes the request to the DiRC based on the request’s proxy configuration. The DiRC receives the request and checks if there is a wado process HTTP listener (i.e., transaction upstream) that matches the request’s parameters. Fig.6 shows an example operation of HTTP listeners in the exemplary medical image management when there is a wado process HTTP listener in the DiRC database that matches the viewer request’s parameters. [0053] When there are no transaction upstreams matching the request’s parameters, the DiRC then checks for the available es6-iServer channel’s HTTP listeners (i.e., configuration upstreams) that match the request’s parameters. The DiRC then routes the request to the selected HTTP listener (upstream) and waits for the response from the corresponding process. The es6-iServer channel’s HTTP listeners receive the HTTP request and make a corresponding internal processing request to create the wado process. The es6-iServer checks for port availability, starts a wado process, and assigns a port to the wado process.
The wado HTTP listeners are then registered in the DiRC database. When the processing is completed, the server’s response is sent back as HTTP-based callback response to the web viewer through the same communication system. [0054] Further web viewer requests may target the wado process HTTP listeners (i.e., transaction upstream), which are already registered in the DiRC database. The wado process may serve independently without depending on the parent es6-iServer channel. By this way, the inter process communication (IPC) and its complexity may be nullified. Similar to the es6-iServer channel, the wado level HTTP listener receives the request, processes the request, and sends the response back through the same communication system. [0055] Example route handler function: [0056] Each viewer request in the exemplary medical image management system includes a route handler function in order to go through the reverse proxy server and reach a corresponding HTTP listener of es6-iServer channel or wado process. There are several types of route handler functions, including es6-iServer route handler function and wado process route handler function. [0057] Operations of route handler function. The web viewer includes a route handler function in the request and sends the request to the exemplary medical image management system. The request goes through the reverse proxy server and DiRC to reach the HTTP listener of es6-iServer channel or wado process using the route handler function. The HTTP listeners of es6-iServer channel and wado process maintain their respective route definitions. When the viewer request’s route handler function matches a route definition of the HTTP listener, the requested function is executed, and the response is returned. [0058] Example image preparation process: [0059] The exemplary medical image management system has an image preparation process to prepare the server response (e.g., medical image) for the web viewer.
[0060] For the image to be rendered by the wado process in the server side, first the decompressed raw DICOM (medical) image data should be read from the filestore and loaded into the server physical memory (RAM). After the medical image data is loaded into the physical memory, the medical image may be processed, and response output may be generated. [0061] Image preparation process. The es6-iServer channel first receives the initial loading request or image preparation request. The request calls the image preparation route and is routed to a corresponding es6-iServer channel. The es6-iServer channel creates the wado process, which prepares and renders the image based on the viewer request. [0062] The image preparation process starts when the medical images are received from the es6-iServer and stored in the filestore in a decompressed format. The wado process, when created, makes a HTTP request to the server to get the corresponding study_object, which is available in the database. Using the study_object information, the wado process prepares a list of medical images to be read from the filestore and loads them into memory. [0063] Generally, the wado process constructs an HTTP request for each file’s information and calls the file IO handler. The file IO handler reads the target file and streams the response to the wado (wireless access to dicom). The wado process loads the streamed response into the memory. This process is image preparation. [0064] Image preparation stack. The image preparation process generally deploys a stack structure that is developed by an internal algorithm. A stack is the representation object that is the partition of a series of medical images either at an instance level or a series level. [0065] Example image preparation method #1 – single server image preparation.
Medical image types, including MG, CR, and US, may have multiple series with fewer images and do not use much of DISK I/O, so image preparation and rendering process may be done in a single server environment. The medical images of types mammogram (MG), computed radiography (CR), and ultrasound (US) are generally partitioned in stack format at instance level and are obtained when the wado process is created in a single server and image preparation is performed within a single process. [0066] In a single server mode, the image manager handles image preparation at three different levels, including patient, study, and series. The single server mode image preparation and the web viewer are synchronized with the stack. The web viewer may set the server mode as single server depending on the multi-frame image types (e.g., MG, CR, US). 1. Study level: When receiving the mode as single server in ‘viewer image preparation request’, the es6-iServer creates one wado process and prepares the entire set of images in the single process. 2. Patient level: The web viewer may request the single server mode for the medical image types where the stack is generated at the instance level. The es6-iServer then initiates a single wado process at the patient level and prepares the images for processing in the single process. 3. Series level: The viewer may request the single server mode for the medical image types where the stack is generated at the series level. The es6-iServer then creates the wado process for each series in a single server, which allows for series-based parallel image preparation. [0067] Example image preparation method #2 – distributed server image preparation. [0068] Some medical image types (e.g., CT and MRI) may arrive in multiple series with a great number of images. Rendering all the images on a single server may result in slowdown due to the large usage of DISK I/O. [0069] Another embodiment of image preparation is distributed server image preparation.
As a result, rendering all the series of a study on a single server will result in overhead and a slowdown due to usage of more DISK I/O, which is directly proportional to the number of images. In this image preparation process, wado processes are created for each series in a distributed environment, which enables parallel image preparation across multiple servers and wado processes. [0070] The system and method of rendering are conducted in stack units and the image manager does the processing at the stack level, which enables optimal utilization of available server resources. Because of the stack's parallel processing in distributed servers, the web viewer may display the medical images with high speed and high quality. [0071] In some use cases, loading the entire series level medical images may not be required. Hence, the exemplary system may prepare a specific range of images based on some rules. By this way, image manager saves system resources and serves additional requests in quick time. [0072] Example image rendering process: [0073] After the image preparation process, the raw DICOM image is loaded into the memory. The raw DICOM image data should be processed based on the viewer request, then should be rendered and transferred as a viewer displayable format (e.g., PNG, JPEG, avif) to the web viewer for displaying. This processing of raw DICOM file data to browser displayable file format is image rendering. [0074] Features of image rendering. Table 1 shows various features of the image rendering process. Table 1 [0075] Table 2 shows several image types that may need image rendering process. Table 2 [0076] Depending on the users and use-cases, there may be several types of image rendering process, including diagnostic image rendering and non-diagnostic image rendering. [0077] Diagnostic image rendering.
In the exemplary medical image management system, the diagnostic image rendering is coupled with the distributed image preparation at the server-side so that a medical image may be processed and rendered at high quality and high speed. The diagnostic image rendering is supported by all functionalities shown in Table 1. [0078] Non-diagnostic image rendering. In some general use cases, medical images may not be rendered with diagnostic quality, so the image manager may render medical images in a non-diagnostic, medium to low quality format (e.g., avif). Additionally, non-diagnostic rendering further reduces the overhead for the image manager as it uses fewer resources than rendering diagnostic quality images. [0079] File I/O Handler [0080] The es6-iServer may be receiving concurrent requests from multiple users, so several wado processes may be running in the same server in parallel. When the web viewer makes several read and write requests to the DISK I/O, the exemplary medical image management system needs a file I/O handler to manage the accesses to the DISK I/O. [0081] A File I/O handler manages the concurrent disk I/O requests through proper queue management. The file I/O handler queues the disk I/O read and write requests and performs them in a First Come First Serve (FCFS) manner depending on the available logical processor. In this way, bottlenecks may be reduced, and the medical images are quickly served to the users. Fig.1 shows an example of file I/O handler managing various disk I/O requests from the wado processes. [0082] Operations of file I/O handler. In the exemplary system, incoming requests are managed through its child wado processes. These wado processes, in turn, utilize concurrent threads for file read and write requests. The file I/O handler is responsible for efficiently handling these file read requests. [0083] File IO handler operates as a dedicated intermediary service.
The file I/O handler receives incoming file read requests from one or more wado processes and manages these requests using a thread pool. The thread pool is a set of threads that the file I/O handler may utilize to process these read requests concurrently.

[0084] When a file read or write request arrives, the file I/O handler places it in a queue within the thread pool in FCFS manner. As threads in the thread pool become free, they are assigned to process these queued requests. Each thread accesses the required file from the filestore, performs the reading operation, and generates an HTTP stream containing the requested image data. This HTTP stream is then sent back to the specific wado process that made the initial request. Fig.7A shows an example file I/O handler managing various file read or write requests from wado processes using a waiting queue.

[0085] The file I/O handler gathers and coordinates the incoming requests, utilizing the thread pool and queue mechanisms to ensure a structured and organized processing flow. In this way, the file I/O handler may boost system performance and responsiveness.

[0086] Features of the file I/O handler. A wado process created under any es6-iServer channel will be performing image computing, so it should read and load the image from the filestore into physical memory. In order for disk I/O to operate efficiently, the exemplary system may have some limit configurations:

[0087] R1 = read process thread limit in the wado process. The wado process in a read function will maintain the list of files that have to be read. The wado process makes concurrent requests with an upper limit of R1. R1 is the concurrent request limit for each wado process running in one or several es6-iServer channels. The R1 value should be configured relative to the number of logical processors for a better performance outcome.

[0088] C = active connection limit in HTTP listener (thread pool size).
The file I/O handler has an HTTP listener function which receives the HTTP-based read or write requests from the wado processes. C is the number of active connections permitted by the file I/O handler. When the number of active connections between the wado processes and the file I/O handler exceeds the limit, the HTTP request may fail. The file I/O handler manages the request queue in a thread pool, so the thread pool size of the file I/O handler remains equal to C.

[0089] R2 = thread limit of the thread pool. The file I/O handler's thread pool has the processing thread limit configuration R2. This feature of the file I/O handler controls disk I/O operations. All the concurrent read requests from wado processes are queued in the thread pool of the file I/O handler, and the disk I/O is performed by the threads under the thread pool. The thread limit should be configured depending on the number of logical processors, HDD/SSD read and write capacity, file size, etc. The formula to calculate the thread limit is defined per Equation 1:

Thread Limit = Min(CPU Cores, Disk I/O Threads) (Equation 1)

[0090] Fig.7B shows various limit configurations of the wado processes and file I/O handler in the exemplary system. By using the optimal thread limit configuration of the file I/O handler, the exemplary system may avoid contention, context switching overhead, and potential bottlenecks.

[0091] es6-iServer memory management:

[0092] Memory management within an es6-iServer package ensures optimal utilization of the available physical memory in the exemplary medical image management system. The process of memory management occurs in various stages throughout the request processing lifecycle, enabling efficient operation while maintaining a balance between memory reservation and usage.

[0093] The memory management of es6-iServer reserves the memory for its processing at the first level. When a new request is received, the required memory is compared with the reserved system memory. The request may proceed only when the memory limit is satisfied.
A request outside the threshold limit may be returned with a prompt for memory shortage, and it may try to access the exemplary system after some time.

[0094] The memory management process ensures that the exemplary medical image management system optimally utilizes its physical memory. Memory resources are well allocated for various processing tasks, which helps to maintain system performance and prevent memory-related issues.

[0095] Memory management process. The memory management process starts with the es6-iServer package being configured with a memory threshold. The memory threshold is a percentage of the total physical memory available in the exemplary system, which is stored in the DiRC database associated with the server.

[0096] The first stage of memory management involves periodic checks initiated by an internal process within the es6-iServer. The periodic check runs at specific intervals and assesses the current availability of physical memory within the system.

[0097] During this stage, the exemplary system involves interactions with the image receiver service. This service records the file sizes of medical images during their registration, storing this information in the database. Fig.8A shows a process of calculating and registering the filesize information of a medical image in the DiRC database. Fig.8B shows a process of retrieving the filesize information of a medical image from the DiRC database and returning the filesize information to the web viewer. Fig.8C shows a process of retrieving the required memory (filesize) from the viewer request's parameters and retrieving the memory threshold limit from the DiRC database.

[0098] The second stage of the memory management process starts when the es6-iServer calculates the total consumed memory, which includes the memory required by the viewer request and the memory reserved by the es6-iServer channels for other wado processes.
When the total consumed memory exceeds the memory threshold limit, an insufficient-memory error is returned to the web viewer.

[0099] When the total consumed memory is below the memory threshold limit, the es6-iServer creates a new wado process (i.e., child process) with parameters (e.g., IP address, port number). The wado process runs on a dedicated port and updates the DiRC database with the newly reserved memory. The update process (i.e., transaction upstream) is also registered in the database. Fig.8D shows the second stage of the memory management process.

[0100] Generally, the reserved memory is efficiently managed throughout the system's lifecycle. When a wado process is terminated, the reserved memory associated with it is properly released, and the DiRC database is updated accordingly.

[0101] Edge computing architecture:

[0102] In some embodiments, the exemplary medical image management system may have an edge computing architecture, Enterprise-Flux, to improve its performance. The Enterprise-Flux architecture comprises a "Flux-Central" server and several "Flux-Pod" servers that are distributed clusters of sub-servers pertaining to different sites/locations. Flux-Central may deliver end-to-end workflow operations by default. Fig.9 shows an embodiment of the exemplary medical image management system configured with the edge computing architecture.

[0103] The exemplary edge computing architecture includes the es6-iServer, which may be installed in "Flux-Central" or "Flux-Pods" to distribute a large volume of concurrent requests and deliver high-quality images on time.

[0104] The es6-iServer may be deployed in multiple Flux-Pods, which may scale up the exemplary system horizontally. The files required for rendering will be available in the es6-iServer's local pods, so the exemplary system may deliver medical images at high speed by using proximity of data and better bandwidth availability.

[0105] Operation of es6-iServer's edge computing architecture.
In the edge computing architecture, an image receiver (storage SCP channel) runs in the Flux-Pod server and receives a medical image from the medical imager. The image is stored in the filestore in decompressed image format, either on a local drive or on a network drive. The image's related metadata information pertaining to an organization is consolidated and synchronized to Flux-Central using a pod-central synchronization service. The es6-iServer registers the metadata information of the consolidated image after a two-stage consistency validation and registers the image in the database. Now the images are available in the application (i.e., web viewer) for access by users.

[0106] The application (i.e., web viewer) server runs in Flux-Central. The users from the pod environment access the application, and the application data from the database is accessed from Flux-Central. As the application data is small in size, the transfer of the application data to the users will not incur much latency.

[0107] The web viewer may view the medical images rendered by the es6-iServer from the server side. As the medical images are available in the pod environment's local storage, the es6-iServer also runs in the pod environment. The web application includes a pod discovery client, which keeps track of the image availability in the pod/Central. The pod discovery client shares the target pod server information with the web viewer.

[0108] When the users request data from the application server, the web viewer triggers an image preparation request for a study/series directly to the pod environment server. The image request is received by the DiRC service running in the pod environment. The DiRC then checks for an available es6-iServer channel and forwards the request. The es6-iServer channel creates a wado process, and thereby the image rendering is performed from the pod.
The wado process reads and loads the image file, which may be stored in its local environment, through the file I/O handler. The file is available in its proximity, so the read latency is reduced and the processed image is returned to the web viewer quickly.

[0109] Fig.10 shows an embodiment of a flux-pod and a flux-central in the edge computing architecture.

[0110] As image computing is managed in an isolated pod environment, a user from one site may not be affected by application access by users from other sites.

[0111] Session management:

[0112] An interaction between users and the exemplary medical image management system is known as a session. A session begins when the user logs in or begins using the exemplary system and ends when they log out or the session expires due to inactivity. Session management involves the maintenance and control of user sessions.

[0113] Session management includes several functions:

1. Authentication: The authentication function confirms the request's identity. After authentication, a request is given access to particular system resources and features.

2. Session creation: When a user starts a session, a session identifier (ID) is created and assigned to the session. The session ID may be a session token or cookie. On the server, several sessions are distinguished from one another using the session identifier.

3. Session tracking: The server may keep track of each request's session, tying it to the request's identity and saving session information. Any information available during the session may be added to this data.

[0114] Session management process. A session management process may include two stages.

[0115] The first stage of the session management process starts when the web viewer sends a study request to the image web server (DiRC) via a reverse proxy server. This request generally contains a session ID that was created when the user logged in to the application.
DiRC identifies an image manager to which it forwards the request along with the session ID. The image manager then calls an API in Flux-Central to authenticate the session ID. If the authentication is successful, the image manager creates a new wado process for the session ID. If the authentication fails, the image manager returns an invalid-session error to the web viewer, so the user must log in again. Fig.11A shows the first stage of the session management process.

[0116] After the authentication in the first stage is successful, the second stage begins. The wado process holds the session ID in its internal memory, processes the request, and responds to the web viewer via the image web server (i.e., DiRC). The DiRC then stores the session information as transaction_upstreams. Any further image processing requests from the web viewer with the same session ID will first be validated by the DiRC. If the DiRC validates and identifies the wado process for the session, it directly routes the processing requests to the wado process, skipping the image manager. The wado process will process the requests and respond to the web viewer via DiRC. If the DiRC validation does not find the wado process, it returns an invalid-session error to the web viewer. Fig.11B shows the second stage of the session management process.

[0117] OAuth 2.0 authentication:

[0118] In some embodiments, the es6-iServer may be implemented with OAuth 2.0 authentication to secure communication with the application server via an authenticated framework.

[0119] OAuth is an open standard used for granting access to functionality, data, etc., without having to deal with the original authentication. OAuth allows a user to grant a client application access to the user's protected resources without revealing the user's credentials. OAuth does this by granting the requesting client application a token after the user approves access.
Each token grants limited access to specified resources for a specific period.

[0120] Features of OAuth 2.0 authorization.

Package registration in the OAuth server: When a customer registers a new package, the system may check whether the customer has a client ID and secret key. When the client ID and secret key exist, the system will return the existing client ID and secret key. When they do not exist, the system will create a new client ID and secret key for the customer.

Client ID and secret key storage: The obtained client ID and secret key are stored in a master database. The storage is associated with the package issued to a particular customer.

Time-to-live (TTL) for individual packages: The TTL for each package is hard-coded within the OAuth server. This TTL determines the validity period of an OAuth token.

Token generation process: When initiating a channel, the system will obtain the channel ID and relevant parent package information from the channel configuration. The parent package information will contain the associated client ID and secret key. Using the channel ID, client ID, and secret key, the system will invoke the token generation API to acquire an OAuth token.

Token management: When an OAuth token does not exist for the requested channel, the system will create a new token. The generated token will be stored in the master database and also in a Redis cache. The system will respond to the API request with the generated token. When an OAuth token already exists, the system will check its validity. When the token is expired, the system will generate a new token and respond. When the token is still valid, the system will return the existing token.

[0121] Client registration: Fig.12A shows the client registration process of the OAuth 2.0 authorization.

[0122] OAuth authentication: Fig.12B shows the OAuth authentication feature in the OAuth server.
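The token management flow described in paragraph [0120] can be sketched as follows. This is a minimal illustration, not the actual es6-iServer implementation: the TokenStore class, the in-memory dict standing in for the master database and Redis cache, and the uuid-based token generator are all assumptions for demonstration, with the TTL simplified to seconds.

```python
import time
import uuid

class TokenStore:
    """Illustrative stand-in for the master database / Redis cache."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds    # TTL is fixed per package
        self.tokens = {}          # channel_id -> (token, expiry time)

    def _generate(self):
        # Placeholder for invoking the real token generation API.
        return uuid.uuid4().hex

    def get_token(self, channel_id, now=None):
        """Return a valid token, reusing a cached one when still valid."""
        now = time.time() if now is None else now
        entry = self.tokens.get(channel_id)
        if entry is not None:
            token, expiry = entry
            if now < expiry:      # token exists and is still valid: reuse it
                return token
        # Token missing or expired: generate, cache, and return a new one.
        token = self._generate()
        self.tokens[channel_id] = (token, now + self.ttl)
        return token
```

A caller simply asks for a token per channel; regeneration on expiry is transparent, which matches the "check validity, regenerate if expired, otherwise return existing" behavior described above.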
[0123] Dedicated Disk I/O Handler - In this setup, the es6-iServer runs in a server configured with a dedicated disk where the DICOM images are stored and retrieved. Hence, the disk I/O handler's control of read permissions is applied for resource management pertaining to the particular server machine where the es6-iServer is running.

[0124] Shared Disk I/O Handler - In this setup, the disk I/O handler manages read requests received from es6-iServers running on more than one server machine, and the DICOM filestore is shared between multiple es6-iServers. Here the disk I/O handler maintains a queue for each server, and control of read permissions is applied for resource management pertaining to the particular server machine where each es6-iServer is running.

[0125] How does the File I/O Handler work?

[0126] In the es6-iServer architecture, incoming requests are managed through its child wado processes. These wado processes, in turn, utilize concurrent threads for file read requests. The pivotal component responsible for efficiently handling these file read requests is the file I/O handler. The Disk I/O Controller is composed of several critical components to manage and optimize disk access across multiple servers:

[0127] Server-Specific Request Queues: The Disk I/O Controller maintains a server-specific request queue for each ES6 server that is accessing the shared disk. Each queue holds incoming disk read requests from the child processes running on that specific ES6 server. This ensures that requests are handled independently for each server.

[0128] Server-Based Thread Pool: The Disk I/O Controller uses a server-based thread pool, where each server has a configured number of threads for handling disk read operations. The number of threads available for concurrent disk reads depends on the server's available CPU cores and the configured threshold, ensuring that disk I/O operations are managed based on resource availability.
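The per-server thread-pool sizing just described can be illustrated with a small sketch. It applies the Thread Limit = Min(CPU cores, Disk I/O threads) rule stated elsewhere in this document; the function names and the layout of the servers dict are illustrative assumptions, and disk_io_threads stands for an estimate of how many concurrent reads the disk can sustain for the expected file size.

```python
def thread_limit(cpu_cores, disk_io_threads):
    """Per-server read thread limit: bounded by both CPU and disk capacity."""
    return min(cpu_cores, disk_io_threads)

def pool_sizes(servers):
    """Compute a thread-pool size for each server.

    servers: dict of server_id -> (cpu_cores, disk_io_threads)
    """
    return {sid: thread_limit(cores, disk)
            for sid, (cores, disk) in servers.items()}
```

For example, a server with 8 cores and a disk able to sustain 16 concurrent reads would be CPU-bound at 8 threads, while a 16-core server on a slower disk sustaining only 4 concurrent reads would be disk-bound at 4.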
[0129] Threshold Limiting: The Disk I/O Controller checks the number of active disk reads for each server. If a server exceeds its configured limit for concurrent disk reads (threshold), the new read requests from that server are queued and placed in the server's specific queue until a thread becomes available.

[0130] TCP Socket Communication: Child processes from multiple servers communicate with the Disk I/O Controller via TCP sockets. When a child process needs to read a file, it sends a request to the Disk I/O Controller over the socket. If the request is permitted, the child process begins reading the file directly from the disk. After the file is read successfully, the child process sends a confirmation message back to the Disk I/O Controller.

[0131] Data Flow and Request Handling Process

[0132] The process flow for handling disk read requests is as follows:

[0133] Request Submission: When a child process from any of the ES6 servers needs to read a DICOM file, it sends a disk read request to the Disk I/O Controller over a TCP socket. The request includes the file identifier (DICOM file ID), the specific ES6 server's ID, and other necessary metadata.

[0134] Server-Specific Queueing: Upon receiving the request, the Disk I/O Controller first identifies the requesting server and checks how many disk read operations are currently being processed for that server. If the number of concurrent reads for the server is below its configured threshold, permission is granted immediately. If the threshold is reached, the request is added to the server-specific request queue, and the child process is instructed to wait.

[0135] Permission Granting: If the server's active disk read count is below the threshold, the Disk I/O Controller grants permission to the requesting child process to read from the disk. This is communicated back to the child process over the TCP socket, allowing it to proceed with the read.
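The queueing and permission-granting steps above can be sketched as follows. This is an illustrative model only, with the TCP socket layer omitted: request_read and complete_read stand in for the request and completion-notification messages, and the class and method names are assumptions rather than the actual controller's API.

```python
from collections import deque

class DiskIOController:
    """Sketch of threshold-based, per-server disk read permission."""

    def __init__(self, thresholds):
        self.thresholds = thresholds                 # server_id -> max concurrent reads
        self.active = {s: 0 for s in thresholds}     # active read count per server
        self.queues = {s: deque() for s in thresholds}  # per-server FCFS queues

    def request_read(self, server_id, file_id):
        """Return True if the read may start now; otherwise queue it."""
        if self.active[server_id] < self.thresholds[server_id]:
            self.active[server_id] += 1              # grant a slot immediately
            return True
        self.queues[server_id].append(file_id)       # threshold reached: wait
        return False

    def complete_read(self, server_id):
        """Completion notification: free a slot and wake the next request."""
        self.active[server_id] -= 1
        if self.queues[server_id]:
            next_file = self.queues[server_id].popleft()
            self.active[server_id] += 1
            return next_file                         # this request may proceed
        return None
```

Because each server has its own counter and queue, one overloaded server's backlog never blocks reads requested by another server, which is the isolation property the per-server design aims for.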
[0136] Direct Disk Read: After receiving permission, the child process directly accesses the shared disk and retrieves the requested DICOM file. The file is then loaded into memory for processing by the child process itself.

[0137] Completion Notification: Once the child process has successfully loaded the DICOM file into memory, it sends a read completion notification back to the Disk I/O Controller via the same TCP socket. This indicates that the file has been successfully loaded and is ready for further processing.

[0138] Thread Pool Management: After receiving the completion message, the Disk I/O Controller decreases the count of active threads for the requesting server. This allows the next request in the server-specific queue to proceed, granting the corresponding child process permission to read the next file.

[0139] Queue Management: The Disk I/O Controller processes requests in the order they arrive in the server-specific request queue, maintaining fairness. Once the active thread count drops below the server's threshold, the next request in the queue is granted permission to proceed.

[0140] Concurrency Control: Server-Specific Resource Management

[0141] The Disk I/O Controller effectively manages concurrency for disk reads through the following mechanisms:

[0142] Server-Specific Request Queues: The controller handles disk read requests for each server separately. Each server has its own request queue, ensuring that requests are processed based on the resource utilization and availability specific to that server. This separation prevents the overload of a single server from affecting the others.

[0143] Dynamic Thread Pool Management: The thread pool is configured based on the CPU cores of the server running the Disk I/O Controller. Each server can have a different number of concurrent disk read threads, allowing for flexibility in managing system resources.
The thread pool ensures that the disk read process is efficiently managed, based on the server's available resources.

[0144] Threshold-Based Queueing: The Disk I/O Controller checks the number of active disk reads per server. If the number of active reads exceeds the threshold for a particular server, new requests from that server are queued. The system grants permission for disk reads on a first-come, first-served basis, ensuring fairness in handling requests for each server.

[0145] Key features of the file I/O handler:

[0146] The wado process created under any es6-iServer channel will be performing the image computing. Hence, it has to read and load the image from the filestore into physical memory. As the disk I/O needs to be handled in an optimized manner, the concurrent thread limit that is best for the system has to be planned carefully. Hence, the following should be considered. R1 = read process thread limit in the wado process. The wado process in the read function will maintain the list of files that have to be read. The wado process will make concurrent requests with an upper limit of R1. R1 is the concurrent request limit for each wado process running in one or multiple es6-iServer channels. The R1 value should be configured relative to the number of logical processors for a better performance outcome.

[0147] R2 = thread limit of the thread pool in the file I/O handler. The file I/O handler's thread pool uses the processing (read) thread limit configuration R2. This is the most critical feature of the file I/O handler, by which the es6-iServer system's disk I/O operation is controlled. All the concurrent read requests from wado processes are queued in the thread pool of the file I/O handler, and disk read permissions are granted to the wado processes under the thread limit. The thread limit has to be configured according to the number of logical processors of the es6-iServer server, but to be safe, using the number of physical cores is best.
Considering the thread limit by CPU cores alone is not an efficient approach. Depending on the HDD/SSD read capacity versus the file size to be read by the thread, the thread limit has to be derived.

[0148] Hence, the formula to calculate the thread limit is as below: Thread Limit = Min(CPU Cores, Disk I/O Threads)

[0149] Benefits of the File I/O Handler:

[0150] The es6-iServer channels can be configured based on the image types such as CT, MR, MG, etc. The standard file size for the CT and MR modalities is 512 KB. Hence, for such modalities, on both HDD and SSD, the number of CPU cores can be used as the thread limit.

[0151] Even in the case of image types such as US, RF, and BTO, which are multiframe images, the files are stored frame by frame. Each frame size may be less than 3 MB. Hence, in such cases also, the thread limit can be determined by the number of CPU cores.

[0152] In some image types such as CR and MG, the size of the image may range from 10 MB to 150 MB. In such cases, the thread limit has to be based on the disk I/O threads rather than the CPU cores. By using the optimal thread limit configuration of the file I/O handler, contention, context switching overhead, and potential bottlenecks in the system can be avoided.

[0153] Memory Management

[0154] Why Memory Management?

[0155] Memory management within the es6-iServer package is a crucial aspect that ensures optimal utilization of the available physical memory in the system. The process of memory management occurs in distinct stages throughout the request processing lifecycle, enabling efficient operation while maintaining a balance between memory reservation and usage. The memory management of es6-iServer brings the idea of reserving memory for its processing at the first level. When a new request is received, the required memory is checked against the cumulative reserved memory, and the request is permitted to proceed only when the memory threshold limit is satisfied.
Hence, the first-come request is served first, and a request received beyond the threshold limit will be returned with a prompt for memory shortage and may try to access the system after some time.

[0156] How does Memory Management work?

[0157] The memory management strategy starts with the es6-iServer package being configured with a predetermined memory threshold. This threshold is a percentage of the total physical memory available in the system. This configuration is stored in the DiRC database associated with the server. The first stage of memory management involves periodic checks initiated by an internal process within the es6-iServer. This process runs at specific intervals and assesses the current availability of physical memory within the system. It then updates this memory information within the DiRC database. Additionally, whenever a new request is received by the es6-iServer channel, it calculates the cumulative memory reserved by the wado processes and updates this value in the DiRC database.

[0158] The first stage of the memory management process also involves interactions with the image receiver service. This service records the file sizes of medical images during their registration, storing this information in the database. When a request is received by DiRC, it evaluates whether the sum of the required file size and the total used physical memory is below the predefined threshold limit. If this condition is not met, an error is returned to the web viewer. However, if sufficient memory is available, the request is forwarded to the es6-iServer for further processing.

[0159] The second stage of memory control occurs during the study request assignment to the available wado processes by the es6-iServer channels for studies, series, or other elements. The es6-iServer examines the cumulative required memory for the wado process along with the memory reserved by the es6-iServer channels for other wado processes.
It checks whether this sum remains below the predefined memory threshold limit. Only when this condition is met does the es6-iServer proceed to create the wado process, updating the DiRC database with the newly reserved memory. Importantly, the reserved memory is efficiently managed throughout the system's lifecycle.

[0160] When a wado process is terminated, the reserved memory associated with it is properly released, and the DiRC database is updated accordingly.

[0161] Benefits:

[0162] The comprehensive memory management approach presented here, with its two-stage evaluation of memory requirements and efficient allocation, ensures that the physical memory of the system is utilized optimally. This strategy guarantees that memory resources are allocated appropriately for various processing tasks, helping to maintain system performance while preventing memory-related issues.

[0163] Edge Computing:

[0164] What is edge computing?

[0165] The Enterprise-Flux architecture comprises a main server termed 'Flux-Cloud' and multiple 'Flux-Pod' servers, which are distributed clusters of sub-servers pertaining to different sites/locations.

[0166] The Flux-Cloud can deliver end-to-end workflow operations by default. To achieve horizontal scalability, edge computing, and higher performance, the 'Flux-Pod' server deployments can be engaged based on need.

[0167] This enterprise solution includes the es6-iServer, which is intended to be installed in the "Flux-Cloud" or in multiple "Flux-Pods", so that edge computing allows distribution of a large volume of concurrent requests and delivery of high-quality images on time. Based on need, the es6-iServer may be deployed in multiple Flux-Pods, thereby making the infrastructure capable of scaling up horizontally. The files required for rendering will be available in the local Pods and hence can be delivered at superior speed by use of data proximity and better bandwidth availability.

[0168] How does es6-iServer edge computing work?
[0169] Edge computing by the es6-iServer comes into effect when the POD environment is used by the customer.

[0170] In a traditional system used by big hospitals with multiple sites, the application will be deployed in a centralized datacenter or the cloud. The medical image data acquired from the medical imager is also received, registered in the database, and archived in the centralized location/datacenter/cloud, in order to manage the application from a single place. The radiologist from a particular site/location requests the application to view the patient's medical image. The medical image is processed on the server side of the centralized location and transferred through the web. In such a case, the radiologist experiences slow display of images due to higher network latency. Also, the centralized server has to serve all the sites, so the request overhead results in reduced performance and affects all the users accessing the application concurrently.

[0171] In the new system, the above challenges have been addressed by introducing the Flux-POD environment, thereby permitting edge computing. Let us see how edge computing works. The image receiver (Storage SCP channel) running in the flux-pod server receives the medical image from the medical imager. The images are initially stored in a local edge file system, often in a decompressed or cache-optimized format. This supports image rendering from local storage, and readiness for the image viewing workflow is enhanced.

[0172] Upon successful reception, metadata is immediately extracted and registered in the local flux database.
This includes:

[0173] Patient Demographics (e.g., Name, DOB, Gender, Account Number)

[0174] Study Information (Study Instance UID, Study Date, Modality)

[0175] Series and SOP Instance Details

[0176] This local handling allows for fast reception and streamed image rendering by the image service (es6-iServer) from its local (proximity) server, which gives the advantage of a faster response to the viewer.

[0177] The patient consistency check is performed as a first stage of cleansing the received data at the source, and then, upon identification of the correct patient, the study consolidation process is performed at the edge. As a result of the process, the study object structures are prepared and the information is migrated to the enterprise through an API, so the study is registered in the enterprise database. Now the study visibility reaches the enterprise level, and the study can be accessed by the radiologist through the application.

[0178] The application server runs in the enterprise. The radiologist from the POD environment accesses the application, and the application data comes from the enterprise and its database. As the application data will be small in size, the transfer of the application data will not have much latency while being served to the radiologist or other users.

[0179] The web viewer will be used by the radiologist to view the medical images, which have to be rendered by the es6-iServer from the server side. As the medical images are available in the POD environment's local storage, the es6-iServer also runs in the POD environment. The web application includes a POD discovery client, which keeps track of the image availability in the POD/Cloud.

[0180] The POD discovery client shares the target POD server information with the web viewer. Requests for application data go to the application server. The web viewer triggers the image study preparation request for a study/series directly to the POD environment server.
The image study preparation request is received by the DiRC service running in the POD environment. The DiRC then checks for an available es6-iServer channel and forwards the request. The es6-iServer channel assigns the study request to an available wado process, and the image rendering is thereby performed from the POD. The wado process reads and loads the image file stored in its local environment under the control of the File IO handler. Because the file is available in close proximity, read latency is reduced and the processed image is returned to the web viewer quickly. [0181] As the image computing is managed in an isolated POD environment, radiologists are not affected by application access by radiologists of other sites/locations, and vice versa. [0182] Flux Discovery Client: [0183] What is the flux discovery client? [0184] Radiologists or clinicians interact with the PACS viewer from the enterprise web application. The Viewer requires a server to render the images of a study, and identifying the best available server for the task is key. The tasks of the "Flux Discovery Client" include: [0185] providing data on the Fluxes that have been assigned to the user; [0186] using geolocation to identify the Fluxes in nearest proximity; [0187] dedicating Flux servers to a user (exclusive and time-bound access); and [0188] providing ES6 channel properties for rendering at the corresponding Fluxes. [0189] In addition, it validates the live availability of these Fluxes and shares their status. The Viewer uses the above data, together with study availability at the Fluxes, to decide how to route image rendering requests to the appropriate Flux. [0190] Session Management [0191] What is session management? [0192] The maintenance and control of user sessions are the fundamental goals of session management, a vital component of developing websites and applications. 
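The DiRC-style channel selection described above can be sketched as a simple capacity check. The channel and process names, and the per-channel process limit, are assumptions for the example.

```python
class Channel:
    """An es6-iServer channel with a bounded pool of wado child processes."""

    def __init__(self, name: str, max_processes: int = 4):
        self.name = name
        self.active = 0
        self.max_processes = max_processes

    def available(self) -> bool:
        return self.active < self.max_processes


def route_study_request(channels, study_uid):
    """Forward the study preparation request to the first available channel,
    where a wado child process is assigned to render the study."""
    for ch in channels:
        if ch.available():
            ch.active += 1  # a wado process now owns this study request
            return ch.name, study_uid
    raise RuntimeError("no es6-iServer channel available")
```

A production DiRC would also weigh load and session state; this sketch only shows the check-then-forward shape of the routing step.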
An interaction between a user and a system, such as a web application, is referred to as a session. A session begins when the user logs in or begins using the system and ends when they log out or the session expires due to inactivity. [0193] Session management involves several key functions: [0194] Authentication: It confirms the request's identity. Following authentication, the request is given access to particular system resources and features. [0195] Session Creation: When a user starts a session, a special session identifier is created and assigned to it. This identifier is often a session token or cookie. On the server, sessions are distinguished from one another using this identifier. [0196] Session Tracking: The server must keep track of each request's session, tying it to the request's identity and saving pertinent session information. Any information that must be available during the session can be added to this data. [0197] How this works in Image Manager: [0198] Session Creation: The login process begins when a user accesses the application's login page and enters a username and password. These credentials are sent to the Application Server through a Reverse Proxy server, which acts as a secure intermediary. [0199] The Web Application Server receives the credentials and checks them against the data stored in the Application Database. [0200] If the credentials are invalid, the server returns an "Authentication Failure" message to the user. [0201] If the credentials are valid, the server generates a unique session ID to track the user's session securely. [0202] This session ID is stored in the Session Database, linked to the user's account. The server then fetches the user's data from the database. [0203] The retrieved data is passed through the Reverse Proxy server and sent to the client-side Application page. The Application page displays the data, showing the user their tasks or other relevant information. 
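The login and session-creation steps above can be sketched minimally as follows. The dictionary stand-ins for the application and session databases, and the plain-text password comparison, are simplifying assumptions for the example; a real system would hash credentials and persist sessions server-side.

```python
import secrets


def login(username: str, password: str, user_db: dict, session_db: dict) -> dict:
    """Check credentials against the application database and, on success,
    create a unique session ID linked to the user's account."""
    if user_db.get(username) != password:
        return {"status": "Authentication Failure"}
    session_id = secrets.token_hex(16)  # unique session identifier
    session_db[session_id] = username   # stored in the session database
    return {"status": "ok", "session_id": session_id}
```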
The user's browser stores this data, allowing the user to interact with and use the application. With the session ID in place, the user is now logged in and can continue using the application's features securely. [0204] Session initialization in the imaging workflow: After the user logs into the web application, the FluxDiscoveryClient, running as a worker in the browser, retrieves the list of available Flux servers and DIRC URLs. It then sends a ping request to each DIRC, including the session ID, to verify connectivity and session status. Upon receiving the ping request, the DIRC checks Redis for the session ID: if the session is new, it stores it in Redis with an expiry time; if it already exists, the expiry is refreshed to maintain session validity. The worker continues sending ping requests every five minutes to ensure the session remains active. [0205] Session authentication in the image workflow: When a user initiates an image viewing or study request through the Viewer, the request is sent to the Distributed Image Request Controller (DIRC) along with the session ID for authentication. Upon receiving the request, the DIRC queries the Redis database to verify that the provided session ID exists and is valid. If the session ID is found in Redis, indicating an active and authenticated session, the DIRC proceeds to route the request based on its type: study-related requests are forwarded to the es6-iServer, which handles study data retrieval, while image-related requests are directed to WADO or the child processes responsible for image rendering and transformations. This ensures that only authenticated users can access study and imaging workflows. [0206] However, if the session ID does not exist in Redis, the session has either expired or is invalid. In such cases, the DIRC rejects the study or image request, returning an authentication failure response to the Viewer. 
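The ping-and-refresh behavior described above can be sketched with an in-memory stand-in for the Redis session store; in a deployment this would use Redis key expiry (e.g., SETEX/EXPIRE). The TTL value here is an assumption for the example, not the actual configured expiry.

```python
import time

SESSION_TTL = 600  # assumed expiry window in seconds; deployment-specific


class SessionStore:
    """Dict-based stand-in for the Redis session store used by the DIRC."""

    def __init__(self):
        self._expiry = {}

    def ping(self, session_id: str, now: float = None) -> None:
        now = time.time() if now is None else now
        # New session: store with an expiry time.
        # Existing session: refresh the expiry to keep the session valid.
        self._expiry[session_id] = now + SESSION_TTL

    def is_active(self, session_id: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        return self._expiry.get(session_id, 0) > now
```

The browser worker would call `ping` every five minutes; once pings stop, the entry simply ages out, which is the cleanup mechanism described later in this section.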
This mechanism ensures that unauthorized access is prevented and that only valid, authenticated sessions can interact with the system. Additionally, by enforcing session validation at the DIRC level, the PACS system maintains secure and controlled access to medical imaging workflows, preventing unauthorized users from accessing patient studies or images. [0207] Session Expiry & Cleanup: When a user logs out of the web application, closes the browser, or the FluxDiscoveryClient is otherwise terminated, the client stops sending periodic ping requests to the DIRC. Since these pings serve as a mechanism to refresh session validity, the absence of a ping indicates that the user is no longer actively engaged with the system. [0208] Once the DIRC detects that a session ID has not been updated within the predefined expiry interval, it allows Redis to automatically remove the session ID from its database. This automatic cleanup ensures that inactive or abandoned sessions do not persist indefinitely, maintaining efficient resource utilization and enhancing system security by preventing unauthorized reuse of expired sessions. [0209] After the session is removed from Redis, any future study or image-related requests originating from the Viewer will fail authentication, as the DIRC will no longer recognize the session ID. When an unauthenticated request is received, the DIRC immediately rejects it, returning an authentication failure response to the Viewer. This ensures that only active, authenticated sessions can access study and image workflows, reinforcing secure access control within the PACS system. [0210] OAuth 2.0 authentication: [0211] What is OAuth 2.0 authentication? [0212] The es6-iServer communicates with the web server to request application data stored in the database. Session authentication cannot be used here, as the service is common to all users and implementing session authentication would be very complex. 
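The request-routing decision the DIRC makes once a session is (or is not) found can be sketched as a single function. The return strings and the set-based session lookup are illustrative stand-ins for the Redis check and the actual forwarding targets.

```python
def route_request(active_sessions: set, session_id: str, request_type: str) -> str:
    """Route an authenticated Viewer request the way the DIRC is described to:
    reject unknown sessions, forward study requests to the es6-iServer,
    and send image requests to the WADO rendering processes."""
    if session_id not in active_sessions:
        return "authentication-failure"  # session expired or invalid
    if request_type == "study":
        return "es6-iserver"             # study data retrieval
    return "wado"                        # image rendering / transformation
```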
In such a case, leaving the communication unauthenticated would introduce risk. Hence the es6-iServer implements OAuth 2.0 authentication so that communication with the web application server takes place within an authenticated framework. [0213] OAuth is an open standard used for authorization, i.e., to grant access to functionality/data/etc. without having to deal with the original authentication. It allows a user to grant a client application access to the user's protected resources without revealing the user's credentials. OAuth does this by granting the requesting client application a token after the user approves access. Each token grants limited access to specified resources for a specific period. [0214] How does OAuth 2.0 authentication work? [0215] Package registration in the OAuth server: [0216] - When a customer registers a new package, the system verifies whether a client ID and secret key [0217] have already been issued for that customer. [0218] - If they do not exist, the system creates a new client ID and secret key for the customer. [0219] - If they exist, the system returns the existing client ID and secret key. [0220] Client ID and secret key storage: [0221] - The obtained client ID and secret key are stored in a master database. [0222] - The storage is associated with the package issued to the particular customer. [0223] Time-to-live (TTL) for individual packages: [0224] - The TTL for each individual package is hard-coded within the OAuth server. [0225] - This TTL determines the validity period of an OAuth token. [0226] Token generation process: [0227] - When initiating a channel, the system obtains the channel ID and relevant parent [0228] package information from the channel configuration. [0229] - The parent package information contains the associated client ID and secret key. [0230] - Using the channel ID, client ID, and secret key, the system invokes the token generation [0231] API to acquire an OAuth token. 
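The get-or-create package registration steps above can be sketched as follows. The dict standing in for the master database and the credential lengths are assumptions for the example.

```python
import secrets


def register_package(customer_id: str, credential_db: dict) -> dict:
    """Issue client credentials for a customer's package, or return the
    existing ones if they were already issued (get-or-create)."""
    if customer_id not in credential_db:
        credential_db[customer_id] = {
            "client_id": secrets.token_hex(8),
            "client_secret": secrets.token_hex(16),
        }
    return credential_db[customer_id]
```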
[0232] Token management: [0233] - If an OAuth token does not exist for the requested channel, the system creates a new token. [0234] - The generated token is stored in the master database and also in a Redis cache. [0235] - The system responds to the API request with the generated token. [0236] - If an OAuth token already exists, the system checks its validity. [0237] - If the token is expired, the system generates a new token and responds with it. [0238] - If the token is still valid, the system returns the existing token. [0239] The inter-process communication between the Disk IO Handler and the wado process is TCP based; hence the service is implemented using a TCP listener. The disk IO handler supports different types of deployments. Dedicated Disk IO Handler - In this setup, the es6-iServer runs on a server configured with a dedicated disk where the DICOM images are stored and retrieved. The disk IO handler's control of read permission is therefore applied for resource management pertaining to the particular server machine where the es6-iServer is running. [0240] Shared Disk IO Handler - In this setup, the disk IO handler manages read requests received from es6-iServers running on more than one server machine, and the DICOM filestore is shared between multiple es6-iServers. Here the disk IO handler maintains a queue for each server, and control of read permission is applied for resource management pertaining to the particular server machine where each es6-iServer is running. [0241] An adaptive, fault-tolerant medical image processing architecture is disclosed that utilizes a hybrid compute pipeline comprising central processing units (CPUs) and graphical processing units (GPUs) to perform dynamic image rendering, transformation, and encoding, since clinical modality needs require handling of higher workload complexity in image processing. 
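The token management rules above reduce to a cached get-or-regenerate check. The dict standing in for the Redis cache/master database and the TTL value are assumptions for the example; the document only states that the TTL is hard-coded in the OAuth server.

```python
import secrets
import time

TOKEN_TTL = 3600  # illustrative per-package TTL, in seconds


def get_token(channel_id: str, token_cache: dict, now: float = None) -> str:
    """Return the cached OAuth token for a channel, creating or regenerating
    it when no valid token exists (mirrors the token-management steps above)."""
    now = time.time() if now is None else now
    entry = token_cache.get(channel_id)
    if entry is None or entry["expires_at"] <= now:
        # No token, or the token expired: generate and store a new one.
        entry = {"token": secrets.token_hex(16), "expires_at": now + TOKEN_TTL}
        token_cache[channel_id] = entry
    return entry["token"]
```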
[0242] The architecture introduces an intelligent routing mechanism to offload bulk, parallelizable tasks such as image normalization, window level adjustments, and enhancement to a GPU-based processing stream, particularly for large-scale modalities such as mammography, while reserving CPU execution for low-latency, time-sensitive rendering tasks. [0243] A centralized memory orchestration layer coordinates zero-copy data exchange between compute agents, enabling high-throughput processing of diagnostic imaging data. The system guarantees operational continuity when a GPU is not available on the hardware; at the same time, it supports set-based asynchronous parallelism for optimal scalability across large study volumes. [0244] Medical imaging systems such as PACS and diagnostic platforms increasingly require rapid and scalable processing of high-resolution imaging data. Modalities such as Mammography (MG), Ultrasound (US), and Magnetic Resonance Imaging (MRI) produce multi-frame studies with substantial resolution and data size. A CPU-only architecture struggles with pipeline processing and with achieving performance under concurrent workloads. This leads to latency in image rendering, bottlenecks in viewing applications, and inefficiencies in clinical workflows. [0245] Existing image rendering frameworks are typically designed for homogeneous processing models. They either lack support for GPU acceleration or rely on rigid GPU dependencies, causing unbalanced utilization of the available CPU-GPU hardware combination. Traditional systems do not offer intelligent task partitioning based on workload, resulting in sub-optimal resource utilization and poor scalability. [0246] The present invention introduces an Adaptive Heterogeneous Dynamic Compute Pipeline (AHDCP) that dynamically and efficiently orchestrates image rendering and transformation tasks across CPU and GPU compute domains. 
[0247] A Dynamic Compute Dispatcher routes low-latency requests (e.g., chest X-rays, CT scout views) to CPU execution pipelines, while scheduling high-complexity or high-volume rendering tasks (e.g., Mammography, MRI) to GPU-based processing queues. The viewport images in any study are always rendered on the CPU to ensure a rapid initial preview, regardless of modality or system workload. [0248] A GPU Transformation Engine executes image normalization, denoising, compression, and rasterization in parallel, via multiple asynchronous streams, using high-performance libraries such as CUDA or NPP. [0249] A unified memory orchestration layer supports zero-copy data access across CPU and GPU agents, reducing memory footprint and latency during inter-agent data transfers. [0250] A fault-tolerant fallback engine ensures continued operation in systems without GPU capabilities, rerouting GPU-bound image tasks to CPU rendering logic without disrupting workflows. [0251] A set-aware pipeline batches image studies into grouped tasks for concurrent stream-based GPU processing, maintaining FIFO integrity and optimizing throughput. [0252] A Centralized Image Repository (CIR) holds intermediate and final rasterized outputs for access by clinical viewers, middleware services, or storage backends. [0253] Imaging Request Orchestrator (IRO): [0254] Receives inbound study requests via DICOM protocols or PACS integrations. It extracts metadata such as modality type, study size, and priority class for task scheduling. [0255] Adaptive Imaging Orchestrator (AIO): [0256] Maintains stateful metadata about system resources, dispatch decisions, and task completion. Performs heuristic-driven task routing based on system telemetry and workload classification. [0257] Intelligent Compute Dispatcher: [0258] Determines routing logic between CPU and GPU engines based on real-time CPU load, image resolution class, and latency sensitivity. 
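The dispatcher rules described above can be sketched as a small decision function: viewport slices always go to the CPU for a fast first preview, heavy modalities go to the GPU queue when a GPU is present, and everything else stays on the CPU. The task field names and the modality set are assumptions for the example.

```python
GPU_MODALITIES = {"MG", "MRI"}  # illustrative high-complexity/high-volume set


def dispatch(task: dict) -> str:
    """Decide the compute domain for a rendering task, per the routing
    rules described above. Returns "cpu" or "gpu"."""
    if task.get("is_viewport"):
        return "cpu"  # viewport images are always CPU-rendered for preview
    if not task.get("gpu_available", True):
        return "cpu"  # fault-tolerant fallback when no GPU is detected
    if task.get("modality") in GPU_MODALITIES:
        return "gpu"  # batch the heavy slices onto GPU streams
    return "cpu"      # low-latency, time-sensitive rendering
```

A fuller dispatcher would also score tasks against real-time CPU load and image resolution class, as paragraph [0258] describes; this sketch captures only the modality/viewport/fallback rules.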
Tasks are scored and assigned accordingly. [0259] GPU Transformation Engine (GTE): [0260] Executes computationally intensive image operations using parallel GPU processing. Supports batch transformation, multi-stream pipeline execution, and hardware-accelerated encoding (e.g., PNG conversion). [0261] CPU Rendering Engine (CRE): [0262] Serves low-latency diagnostic previews and also acts as the fallback mechanism for non-GPU systems. Uses multi-threaded execution models to accelerate performance when the GPU is unavailable or overcommitted. [0263] Centralized Image Repository (CIR): [0264] A shared, zero-copy-accessible memory buffer pool that stores rendered and transformed images. Enables synchronization between compute agents and decouples processing from delivery layers. [0265] Asynchronous Batch Manager (ABM): [0266] Manages sets of image transformation tasks grouped by modality or study. Distributes workloads across asynchronous GPU streams to improve utilization and reduce queuing overhead. [0267] Routing and Execution Logic [0268] For high-priority preview slices (e.g., initial CT/XR images), the dispatcher routes to CPU pipelines to reduce load time. [0269] For large-scale studies (e.g., MG, MRI), the viewport images in the study are always sent to the CPU for initial rendering, regardless of system load or modality type. This ensures that the viewport images are immediately available for preview, even if the system is under heavy load or the GPU is occupied. The remaining slices in the study are sent to the GPU engine for batch processing. [0270] If GPU hardware is not detected or exceeds load thresholds, the fallback engine executes all tasks using CPU-based primitives, including the viewport images, with redundancy control to maintain workflow continuity. [0271] Zero-copy architecture via unified memory mapping for CPU-GPU co-access. [0272] Dynamic memory allocation scaled to study size, resolution, and processing stage. 
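The Asynchronous Batch Manager's grouping behavior can be sketched as follows: tasks are grouped by study in arrival order (preserving FIFO within each study) and the groups are dealt round-robin across a fixed number of streams. The task shape and stream count are assumptions for the example; real GPU stream scheduling would use CUDA streams rather than lists.

```python
from collections import defaultdict, deque


def batch_by_study(tasks, streams: int = 2) -> dict:
    """Group transformation tasks by study UID and assign the groups
    round-robin to asynchronous streams, keeping per-study FIFO order."""
    groups = defaultdict(deque)
    for t in tasks:  # arrival order is preserved within each study (FIFO)
        groups[t["study_uid"]].append(t)
    assignment = defaultdict(list)
    for i, (uid, queue) in enumerate(groups.items()):
        assignment[i % streams].append((uid, list(queue)))
    return dict(assignment)
```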
[0273] CIR memory pages support in-place image transformation and format conversion. [0274] Fault Tolerance [0275] All GPU tasks are redundantly registered with CPU fallback paths. [0276] The CIR acts as a persistent buffer for interrupted image jobs. [0277] A background validation engine verifies task integrity during and after processing. [0278] Use Cases [0279] PACS servers processing over 100 concurrent studies per hour, requiring parallel transformation and compression. [0280] Diagnostic viewers in mobile or resource-constrained environments defaulting to CPU-only rendering. [0281] It must also be noted that, as used in the specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from "about" or "approximately" one particular value and/or to "about" or "approximately" another particular value. When such a range is expressed, other exemplary embodiments include the one particular value and/or the other particular value. [0282] By "comprising" or "containing" or "including" is meant that at least the named compound, element, particle, or method step is present in the composition, article, or method, but this does not exclude the presence of other compounds, materials, particles, or method steps, even if such other compounds, materials, particles, or method steps have the same function as what is named. [0283] In describing example embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. It is also to be understood that the mention of one or more steps of a method does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. 
Steps of a method may be performed in a different order than those described herein without departing from the scope of the present disclosure. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified. [0284] The following patents, applications and publications as listed below and throughout this document are hereby incorporated by reference in their entirety herein.

Claims

What is claimed: 1. A system comprising: a web viewer application configured to send viewer requests and receive messages; a reverse proxy server configured to receive and route the viewer requests to an image web server based on the request's proxy configurations; an image web server configured to distribute the viewer requests to image manager servers; image manager servers configured to receive the viewer requests and generate child processes to prepare and render images; a database configured to store the images, store information of the viewer requests and the child processes, and store reserved memory information of the system; and a file I/O handler configured to manage the viewer requests to access the database using a waiting queue. 2. The system of claim 1 further comprising an edge-computing architecture to improve the system performance. 3. A method comprising: receiving images from the image manager servers; storing the images in a filestore in a decompressed format; generating a child process to prepare a list of images in the decompressed format from the filestore; and loading the list of images in the decompressed format into an intermediate memory. 4. The method of claim 3 further comprising: receiving a requested image format from a viewer request; converting the images in the list of images from the decompressed format to the requested format; and transferring the converted images to a web viewer for displaying. 5. A method comprising: assessing an available memory of the system; receiving an image information in a viewer request for an image; receiving a filesize of the image stored in a database; combining the filesize of the image and a memory consumption of other processes to get a combined memory consumption; and comparing the combined memory consumption with the available memory of the system. 6. 
The method of claim 5 further comprising: routing the viewer request to an image manager server when the combined memory consumption satisfies the available memory of the system. 7. The method of claim 5 further comprising: generating an error message for the web viewer when the combined memory consumption exceeds the available memory of the system. 8. A method comprising: sending a viewer request from a web viewer to an image web server; forwarding the viewer request from the image web server to an image manager server using a session identifier (ID); and authenticating the session ID using an application programming interface (API). 9. The method of claim 8 further comprising: generating a child process for the viewer request; and storing the session ID and the child process in the image web server. 10. The method of claim 8 further comprising: validating the session ID and the child process; and processing the viewer request and sending a response back to the web viewer. 11. The method of claim 8 further comprising: validating the session ID and the child process; and generating an error message for the web viewer.
PCT/US2025/035045 2024-06-24 2025-06-24 Medical image management with channel selection and load balancing Pending WO2026006311A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463663470P 2024-06-24 2024-06-24
US63/663,470 2024-06-24

Publications (1)

Publication Number Publication Date
WO2026006311A1 true WO2026006311A1 (en) 2026-01-02

Family

ID=98222798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/035045 Pending WO2026006311A1 (en) 2024-06-24 2025-06-24 Medical image management with channel selection and load balancing

Country Status (1)

Country Link
WO (1) WO2026006311A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030149756A1 (en) * 2002-02-06 2003-08-07 David Grieve Configuration management method and system
US20050228870A1 (en) * 2002-11-11 2005-10-13 Openwave Systems Inc. Application-based protocol and proxy selection by a mobile device in a multi-protocol network environment
US20080098301A1 (en) * 2006-10-20 2008-04-24 Tyler James Black Peer-to-web broadcasting
US20120271905A1 (en) * 2004-03-31 2012-10-25 Qurio Holdings, Inc. Proxy caching in a photosharing peer-to-peer network to improve guest image viewing performance
US20200097339A1 (en) * 2013-04-01 2020-03-26 Oracle International Corporation Orchestration Service for a Distributed Computing System
US20220038902A1 (en) * 2020-11-13 2022-02-03 Markus Dominik Mueck Technologies for radio equipment cybersecurity and multiradio interface testing



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25826729

Country of ref document: EP

Kind code of ref document: A1