
HK1199779B - Method, medium and system for reducing buffer usage for tcp proxy session based on delayed acknowledgment - Google Patents


Info

Publication number
HK1199779B
HK1199779B
Authority
HK
Hong Kong
Prior art keywords
rtt
server
tcp
session
determining
Prior art date
Application number
HK15100133.1A
Other languages
Chinese (zh)
Other versions
HK1199779A1 (en)
Inventor
L. Han
Z. Cao
Original Assignee
A10 Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/747,545 (US9531846B2)
Application filed by A10 Networks, Inc.
Publication of HK1199779A1
Publication of HK1199779B

Description

Method, medium, and system for reducing buffer usage of TCP proxy sessions based on delayed acknowledgements
Technical Field
The present invention relates generally to data communications, and more particularly to service gateways.
Background
Many service gateways, such as firewalls and server load balancers, have provided Transmission Control Protocol (TCP) proxy functionality for some time. Typical service applications of TCP proxies include network analytics, security, and traffic adaptation to asymmetric client and server conditions. A TCP proxy server typically allocates a certain amount of memory to buffer the data packets of a TCP proxy session between a client device and a server. This memory handles data packet buffering for both the client-side session and the server-side session. The allocation of memory between the client-side session send and receive buffers and the server-side session send and receive buffers does not typically take performance into account. The TCP proxy receives data packets from the server-side session, processes them according to the required service application, and sends the resulting data packets to the client-side session. In an ideal scenario, these steps complete before the next data packet from the server-side session reaches the TCP proxy server. In many deployments, however, client devices access services through mobile broadband access or residential Internet access, with long transmission times due to long-range wide area networks and slow subscriber access bandwidths, while the TCP proxy servers are located within a data center and enjoy short transmission times and high-capacity bandwidth.
In this deployment scenario, when the TCP proxy server receives data packets from the server-side session, the received packets are placed in the server-side session receive buffer, where they wait to be processed by the service application. The service application in turn waits for the client-side session to free space in the client-side session send buffer, which is filled with earlier-processed data packets still awaiting their own slow transmission to the client.
Typically, a TCP proxy server sends a TCP acknowledgement, according to the TCP protocol, upon successfully receiving the appropriate amount of TCP data from the server. When the server receives a TCP acknowledgement of the previously sent TCP data, it sends additional TCP data packets to the TCP proxy. The TCP proxy server must then further increase the memory space of the server-side session receive buffer to store the additional TCP data packets while earlier TCP data waits to be processed and sent to the client. This cascading effect causes the TCP proxy server to consume a large amount of memory to accommodate the server-side session receive buffer. The larger the buffer space used, the fewer memory resources the TCP proxy server has available to handle additional TCP proxy sessions, even though it may have abundant other resources to handle the additional load.
Disclosure of Invention
According to one embodiment of the invention, a method of reducing cache usage of a Transmission Control Protocol (TCP) proxy session between a client and a server, comprises: (a) determining a first Round Trip Time (RTT) for a server-side TCP session for a TCP proxy session between the service gateway and the server, and determining a second RTT for a client-side TCP session for a TCP proxy session between the service gateway and the client; (b) comparing, by the serving gateway, the first RTT to the second RTT; (c) determining whether the second RTT exceeds the first RTT; (d) in response to determining that the second RTT exceeds the first RTT, calculating, by the service gateway, a required RTT based on the second RTT; and (e) setting a timer by the serving gateway according to the calculated required RTT, wherein TCP acknowledgements of the server-side TCP session are delayed until the timer expires.
In one aspect of the present invention, determining (c) and calculating (d) comprise: c1) determining whether the second RTT exceeds the first RTT by a predetermined threshold; and d1) in response to determining that the second RTT exceeds the first RTT by the predetermined threshold, calculating, by the service gateway, a required RTT based on the second RTT.
In one aspect of the invention, calculating (d) comprises: d1) the required RTT is calculated by the serving gateway as a percentage of the second RTT.
In one aspect of the invention, calculating (d) comprises: (d1) the required RTT is calculated by the service gateway as the second RTT minus a predetermined value.
In one aspect of the present invention, setting (e) comprises: (e1) receiving data packets from a server over a server-side TCP session through a service gateway; (e2) determining, by the service gateway, a need to send a TCP acknowledgement to the server; (e3) setting a timer to a required RTT through a service gateway; and (e4) in response to expiration of the timer, sending a TCP acknowledgement to the server through the service gateway.
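Steps (a) through (e) summarized above can be sketched as a single decision function. This is an illustrative model only, not the claimed implementation; the function name, parameter names, and default values (`threshold_ratio`, `fraction`) are hypothetical assumptions:

```python
def delayed_ack_timer(server_rtt, client_rtt, threshold_ratio=2.0, fraction=0.5):
    """Sketch of steps (a)-(e): given the server-side RTT (first RTT) and the
    client-side RTT (second RTT), both in seconds, return the value for the
    delayed-acknowledgement timer, or 0.0 when no delay should be applied.
    All names and defaults are illustrative assumptions."""
    # (b)/(c): compare the RTTs; delay only when the client-side RTT
    # exceeds the server-side RTT (here, by a predetermined ratio)
    if client_rtt > server_rtt * threshold_ratio:
        # (d): calculate the required RTT as a percentage of the second RTT
        return client_rtt * fraction
    # no adjustment needed: acknowledge immediately
    return 0.0
```

Step (e) would then arm a timer with the returned value and send the server-side TCP acknowledgement only when the timer expires.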
Systems and computer readable media corresponding to the methods summarized above are also described and claimed herein.
Drawings
Fig. 1 illustrates a service gateway that provides services for a TCP proxy session between a client device and a server according to an embodiment of the present invention.
Figure 2 illustrates components of a service gateway according to an embodiment of the present invention.
Fig. 3 illustrates a process of delaying the transmission of a TCP ACK packet according to an embodiment of the present invention.
Detailed Description
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a Random Access Memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments of the present invention, as described below, adjust the server-side session transfer time to reduce buffer usage, which in turn increases the TCP proxy session capacity of the TCP proxy server. According to an embodiment of the invention, the TCP proxy server delays the server's sending of additional TCP data; the delay allows the TCP proxy server to process the TCP data currently in the server-side session receive buffer and send it to the client. When the server sends additional TCP data after the delay, the TCP proxy server has enough space in the server-side session receive buffer to receive it. This delay extends the transmission time of the server-side session between the server and the TCP proxy server.
Fig. 1 shows a service gateway 300 serving a TCP proxy session 400 between a client device 100 and a server device 200 via a data network 153 according to an embodiment of the present invention.
In one embodiment, the data network 153 includes an Internet Protocol (IP) network, a corporate data network, a regional corporate data network, an Internet service provider network, a residential data network, a wired network such as Ethernet, a wireless network such as a WiFi network, or a cellular network. In one embodiment, the data network 153 resides in a data center or connects to a network cloud or an application cloud.
Client device 100 is generally a computing device with network access capabilities. In one embodiment, the client device 100 is a workstation, desktop or laptop personal computer, Personal Digital Assistant (PDA), tablet computing device, smartphone or cellular phone, set-top box, Internet media browser, Internet media player, smart sensor, smart medical device, networked set-top box, networked television, networked DVR, networked Blu-ray player, networked handheld gaming device, or media center.
In one embodiment, the client device 100 is a residential broadband gateway, a commercial Internet gateway, a commercial Web proxy server, a network Consumer Premise Equipment (CPE), or an Internet access gateway.
In one embodiment, client device 100 comprises a Broadband Remote Access Server (BRAS), a Digital Subscriber Line Access Multiplexer (DSLAM), a Cable Modem Termination System (CMTS), or a service provider access gateway.
In one embodiment, client device 100 comprises a mobile broadband access gateway such as a Gateway GPRS Support Node (GGSN), Home Agent (HA), or PDN Gateway (PGW).
In one embodiment, the client device 100 includes a server load balancer, an application delivery controller, a traffic manager, a firewall, a VPN server, a remote access server, or an enterprise or data center access gateway.
In one embodiment, client device 100 is a device similar to service gateway 300.
The client device 100 initiates a TCP session 400 to the server 200 via the service gateway 300.
The server 200 is generally a computing device comprising a processor coupled to a computer-readable medium that stores computer-readable program code. Executed by the processor, the computer-readable program code implements the functionality of a Web server, file server, video server, database server, application server, voice system, conference server, media gateway, or media center that provides TCP-based services or application services to the client device 100 using the TCP session 400.
In one embodiment, the server 200 is a device similar to the service gateway 300.
In one embodiment, TCP session 400 comprises an HTTP session, an FTP file transfer session, a TCP-based video streaming session, a TCP-based music streaming session, a file download session, a group conferencing session, a database access session, a remote terminal access session, a telecommunications network session, an e-commerce transaction, a remote program call, or a TCP-based network communication session.
As shown in Fig. 2, the service gateway 300 comprises a processor 310 operatively coupled to a memory module 320, a network interface module 330, and a computer-readable medium 340. The computer-readable medium 340 stores computer-readable program code that, when executed by the processor 310 using the memory module 320, implements embodiments of the invention as described herein. In some embodiments, the service gateway 300 is implemented as a server load balancer, an application delivery controller, a service delivery platform, a traffic manager, a security gateway, a component of a firewall system, a component of a Virtual Private Network (VPN), a load balancer for video servers, a gateway that distributes load to one or more servers, a Web or HTTP server, a Network Address Translation (NAT) gateway, or a TCP proxy server.
In one embodiment, computer-readable medium 340 includes instructions for service application 350, and processor 310 executes service application 350.
In one embodiment, service application 350 implements the functionality of a VPN firewall, a gateway security application, an HTTP proxy, a TCP-based audio or video streaming session proxy, a Web session proxy, content filtering, server load balancing, a firewall, or a Web application session proxy.
Returning to fig. 1, in one embodiment of providing a service to a TCP proxy session 400 between a client device 100 and a server 200, a service gateway 300 establishes a client-side TCP session 420 with the client device 100 and a server-side TCP session 470 with the server 200.
In one embodiment, the service gateway 300 allocates a receive buffer 474 for the server-side TCP session 470. In one embodiment, the receive buffer 474 resides in the memory module 320.
In one embodiment, the service gateway 300 monitors the performance of the TCP session 470 using a Round Trip Time (RTT) 497 of the TCP session 470. The service gateway 300 measures or estimates the RTT 497 of the TCP session 470. In an exemplary embodiment, the service gateway 300 measures the RTT 497 as the length of time between when the service gateway 300 sends a data packet of the TCP session 470 to the server 200 and when the service gateway 300 receives an acknowledgement of that data packet. In one embodiment, the service gateway 300 measures the RTT 497 periodically or occasionally during the TCP session 470. In one embodiment, service gateway 300 estimates RTT 497 based on one or more previous server-side TCP sessions with server 200. In one embodiment, the service gateway 300 estimates the RTT 497 to be 10 milliseconds, 100 milliseconds, 3 milliseconds, 22 milliseconds, or 3 seconds.
In one embodiment, service gateway 300 retrieves data from receive buffer 474, processes the data (in one embodiment, through service application 350), and sends the processed data to client device 100 over TCP session 420. In one embodiment, service gateway 300 processes data from receive buffer 474 whenever TCP session 420 is ready for transmission; slow transmission on TCP session 420 therefore causes service gateway 300 to delay processing data from receive buffer 474. In one embodiment, service gateway 300 monitors the performance of TCP session 420 using the Round Trip Time (RTT) 492 of TCP session 420. Service gateway 300 measures or estimates RTT 492 of TCP session 420. In an exemplary embodiment, service gateway 300 measures RTT 492 as the time between when service gateway 300 transmits a data packet of TCP session 420 to client device 100 and when service gateway 300 receives an acknowledgement of the transmitted data packet. In one embodiment, service gateway 300 measures RTT 492 periodically or occasionally during TCP session 420. In one embodiment, service gateway 300 estimates RTT 492 based on one or more prior client-side TCP sessions with client device 100. In one embodiment, service gateway 300 estimates RTT 492 to be 10 milliseconds, 100 milliseconds, 3 milliseconds, 22 milliseconds, or 3 seconds.
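The send-to-acknowledgement measurement described above for RTT 497 and RTT 492 can be sketched as follows. The class and its interface are hypothetical illustrations, not the patented implementation; production TCP stacks keep a smoothed RTT estimate (see RFC 6298) rather than a single raw sample:

```python
import time

class RttEstimator:
    """Measures RTT as the elapsed time between sending a data packet and
    receiving its acknowledgement; one instance per session would track
    RTT 497 (server side) or RTT 492 (client side)."""

    def __init__(self):
        self._sent = {}   # sequence number -> send timestamp (seconds)
        self.rtt = None   # most recent RTT sample, or None

    def on_send(self, seq, now=None):
        """Record the send time of the packet with sequence number seq."""
        self._sent[seq] = time.monotonic() if now is None else now

    def on_ack(self, seq, now=None):
        """Record the acknowledgement; update and return the RTT sample.
        An acknowledgement for an unknown seq leaves the last sample."""
        t0 = self._sent.pop(seq, None)
        if t0 is not None:
            self.rtt = (time.monotonic() if now is None else now) - t0
        return self.rtt
```

One estimator instance would be fed from the gateway's send and receive paths for each of the two sessions.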
In one embodiment, serving gateway 300 compares RTT 497 with RTT 492. In one embodiment, when serving gateway 300 determines that RTT 492 exceeds RTT 497 by a certain threshold, serving gateway 300 applies processing as described below to adjust RTT 497 to narrow the gap between RTT 492 and RTT 497. In one embodiment, RTT 492 is determined to exceed RTT 497 by the threshold when RTT 492 is at least 2, 5, or 10 times greater than RTT 497, or when RTT 492 exceeds RTT 497 by at least a predetermined amount (e.g., 20, 50, or 200 milliseconds).
In one embodiment, serving gateway 300 determines that RTT 492 does not exceed RTT 497 by the threshold, and serving gateway 300 does not adjust RTT 497.
In one embodiment, service gateway 300 regularly or occasionally measures RTT 492 and RTT 497 and compares RTT 492 with RTT 497.
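The threshold test described above (a multiple of RTT 497, or an absolute gap) can be expressed as a small predicate. The function and parameter names are illustrative assumptions, not from the patent:

```python
def rtt_gap_exceeds_threshold(client_rtt, server_rtt, ratio=2.0, margin=None):
    """True when RTT 492 (client side) exceeds RTT 497 (server side) by the
    threshold: either a multiple (e.g. 2, 5, or 10 times) or, when margin
    is given, an absolute amount (e.g. 0.02, 0.05, or 0.2 seconds)."""
    if margin is not None:
        # absolute-gap form of the threshold
        return client_rtt - server_rtt >= margin
    # multiplicative form of the threshold
    return client_rtt >= server_rtt * ratio
```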
Fig. 3 illustrates a process for adjusting the RTT 497 of the server-side TCP session 470 according to an embodiment of the present invention. In one embodiment, the service gateway 300 receives a data packet 480 on the TCP session 470 from the server 200. Service gateway 300 stores data packet 480 in receive buffer 474. In one embodiment, service gateway 300 determines from receive buffer 474 that a TCP acknowledgement must be sent in accordance with the TCP protocol. Rather than immediately sending the TCP ACK data packet 479, the service gateway 300 schedules the TCP ACK data packet 479 to be sent at a later time using a timer 487. Service gateway 300 sets timer 487 to the required RTT 498. When timer 487 expires, service gateway 300 sends TCP ACK data packet 479. In one embodiment, service gateway 300 includes a clock (not shown) that allows service gateway 300 to determine whether timer 487 has expired.
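The deferred acknowledgement of Fig. 3 amounts to arming a one-shot timer instead of sending the ACK immediately. The sketch below uses Python's `threading.Timer` purely for illustration; an actual gateway would drive timer 487 from its own event loop or clock:

```python
import threading

def schedule_delayed_ack(send_ack, required_rtt):
    """Instead of sending the TCP ACK (packet 479) immediately, arm a
    one-shot timer (timer 487) that invokes send_ack after required_rtt
    seconds. Returns the timer so the caller can cancel it if needed."""
    timer = threading.Timer(required_rtt, send_ack)
    timer.start()
    return timer
```

For example, `schedule_delayed_ack(transmit_ack, 0.05)` would delay the acknowledgement by 50 milliseconds, where `transmit_ack` is a hypothetical callback that writes the ACK to the server-side session.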
In one embodiment, serving gateway 300 calculates the required RTT 498 based on RTT 492. In one embodiment, the required RTT 498 is calculated to fall within the range of RTT 492. For example, the required RTT 498 is calculated as a predetermined percentage of RTT 492, such as 30%, 40%, 60%, or 75% of RTT 492. In one embodiment, the required RTT 498 is calculated as RTT 492 minus a predetermined value, such as 10 milliseconds, 5 milliseconds, or 25 milliseconds. The required RTT 498 delays the sending of TCP acknowledgements for the TCP session 470 and thereby increases the round trip time of the TCP session 470. When serving gateway 300 measures RTT 497, as shown in Fig. 1, after sending TCP ACK data packet 479, RTT 497 is expected to have a value similar to the required RTT 498.
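The two ways of deriving the required RTT 498 from RTT 492 described above can be written as one helper; the function and parameter names are hypothetical illustrations:

```python
def compute_required_rtt(client_rtt, percent=None, minus=None):
    """Required RTT 498 derived from RTT 492 (client_rtt, in seconds):
    either a predetermined percentage of RTT 492 (e.g. 30, 40, 60, or 75
    percent) or RTT 492 minus a predetermined value (e.g. 0.005, 0.010,
    or 0.025 seconds). With neither option set, RTT 492 is returned."""
    if percent is not None:
        # percentage form
        return client_rtt * percent / 100.0
    if minus is not None:
        # subtraction form, clamped so the timer is never negative
        return max(client_rtt - minus, 0.0)
    return client_rtt
```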
In one embodiment, when serving gateway 300 determines that RTT 492 is significantly greater than RTT 497, serving gateway 300 measures RTT 497 and RTT 492, compares RTT 492 to RTT 497, and performs the processing steps of Fig. 3, thereby reducing the required memory capacity of receive buffer 474, which in turn increases the capacity of serving gateway 300 to process additional TCP proxy sessions.
In one embodiment, the predetermined percentage or predetermined value applied to RTT 492 is determined empirically by the user, i.e., different percentages and values are tried for different TCP proxy sessions with different clients and servers. Typically, the smaller the difference between RTT 492 and RTT 497, the smaller the required memory capacity of receive buffer 474. In one embodiment, the user configures the required RTT 498 to reduce the difference between RTT 497 and RTT 492. In one embodiment, the predetermined percentage is between 30% and 50% and is configured by the user for the service gateway 300. The user can configure a higher predetermined percentage or required RTT 498 for a smaller receive buffer 474 capacity, and a lower predetermined percentage or required RTT 498 for a larger receive buffer 474 capacity. The user may tune the predetermined percentage or value to balance receive buffer 474 capacity against the required RTT 498.
Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.

Claims (15)

1. A method of reducing cache usage of a Transmission Control Protocol (TCP) proxy session between a client and a server, the method comprising:
(a) determining a first Round Trip Time (RTT) for a server-side TCP session for a TCP proxy session between the service gateway and the server, and determining a second RTT for a client-side TCP session for a TCP proxy session between the service gateway and the client;
(b) comparing, by the serving gateway, the first RTT to the second RTT;
(c) determining whether the second RTT exceeds the first RTT;
(d) in response to determining that the second RTT exceeds the first RTT, calculating, by the service gateway, a required RTT based on the second RTT; and
(e) setting, by the serving gateway, a timer according to the calculated required RTT, wherein TCP acknowledgements of the server-side TCP session are delayed until the timer expires.
2. The method of claim 1, wherein the determining (c) comprises:
c1) determining whether the second RTT exceeds the first RTT by a predetermined threshold; and
the calculating (d) comprises:
d1) in response to determining that the second RTT exceeds the first RTT by the predetermined threshold, calculating, by the serving gateway, a required RTT based on the second RTT.
3. The method of claim 1, wherein the calculating (d) comprises:
d1) the required RTT is calculated by the serving gateway as a percentage of the second RTT.
4. The method of claim 1, wherein calculating (d) comprises:
(d1) the required RTT is calculated by the serving gateway as the second RTT minus a predetermined value.
5. The method of claim 1, wherein the setting (e) comprises:
(e1) receiving data packets from a server over a server-side TCP session through a service gateway;
(e2) determining, by the service gateway, a need to send a TCP acknowledgement to the server;
(e3) setting a timer to a required RTT through a service gateway; and
(e4) in response to expiration of the timer, a TCP acknowledgement is sent to the server through the service gateway.
6. A non-transitory computer readable medium having computer readable program code embedded therein to reduce cache usage of a Transmission Control Protocol (TCP) proxy session between a client and a server, the computer readable program code configured to:
(a) determining a first Round Trip Time (RTT) for a server-side TCP session for a TCP proxy session between the service gateway and the server, and determining a second RTT for a client-side TCP session for a TCP proxy session between the service gateway and the client;
(b) comparing the first RTT with the second RTT;
(c) determining whether the second RTT exceeds the first RTT;
(d) in response to determining that the second RTT exceeds the first RTT, calculating a required RTT based on the second RTT; and
(e) setting a timer according to the calculated required RTT, wherein TCP acknowledgements of the server side TCP session are delayed until the timer expires.
7. The medium of claim 6, wherein the computer readable program code configured to determine (c) and calculate (d) is further configured to:
c1) determining whether the second RTT exceeds the first RTT by a predetermined threshold; and
d1) in response to determining that the second RTT exceeds the first RTT by the predetermined threshold, calculating, by the serving gateway, a required RTT based on the second RTT.
8. The medium of claim 6, wherein the computer readable program code configured to calculate (d) is further configured to:
d1) the required RTT is calculated as a percentage of the second RTT.
9. The medium of claim 6, wherein the computer readable program code configured to calculate (d) is further configured to:
(d1) the required RTT is calculated as the second RTT minus a predetermined value.
10. The medium of claim 6, wherein the computer readable program code configured to set (e) is further configured to:
(e1) receiving data packets from a server on a server-side TCP session;
(e2) determining a need to send a TCP acknowledgement to a server;
(e3) setting a timer to a required RTT; and
(e4) in response to expiration of the timer, a TCP acknowledgement is sent to the server.
11. A system for reducing cache usage of a Transmission Control Protocol (TCP) proxy session between a client and a server, comprising:
a service gateway, wherein a server-side TCP session of a TCP proxy session is established between the service gateway and the server, and a client-side TCP session of the TCP proxy session is established between the service gateway and the client, the service gateway:
(a) determining a first Round Trip Time (RTT) for a server-side TCP session for a TCP proxy session between the service gateway and the server, and determining a second RTT for a client-side TCP session for a TCP proxy session between the service gateway and the client;
(b) comparing the first RTT with the second RTT;
(c) determining whether the second RTT exceeds the first RTT;
(d) in response to determining that the second RTT exceeds the first RTT, calculating a required RTT based on the second RTT; and
(e) setting a timer according to the calculated required RTT, wherein TCP acknowledgements of the server side TCP session are delayed until the timer expires.
12. The system of claim 11, wherein the determining (c) comprises:
c1) determining whether the second RTT exceeds the first RTT by a predetermined threshold; and
the calculating (d) comprises:
d1) in response to determining that the second RTT exceeds the first RTT by the predetermined threshold, a required RTT is calculated based on the second RTT.
13. The system of claim 11, wherein the calculating (d) comprises:
d1) the required RTT is calculated as a percentage of the second RTT.
14. The system of claim 11, wherein the calculating (d) comprises:
(d1) the required RTT is calculated as the second RTT minus a predetermined value.
15. The system of claim 11, wherein the setting (e) further comprises:
(e1) receiving data packets from a server on a server-side TCP session;
(e2) determining a need to send a TCP acknowledgement to a server;
(e3) setting a timer to a required RTT; and
(e4) in response to expiration of the timer, a TCP acknowledgement is sent to the server.
HK15100133.1A 2013-01-23 2015-01-07 Method, medium and system for reducing buffer usage for tcp proxy session based on delayed acknowledgment HK1199779B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/747,545 US9531846B2 (en) 2013-01-23 2013-01-23 Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US13/747,545 2013-01-23

Publications (2)

Publication Number Publication Date
HK1199779A1 HK1199779A1 (en) 2015-07-17
HK1199779B true HK1199779B (en) 2018-09-21

