
CN116170523A - Service transmission method, device and storage medium - Google Patents

Service transmission method, device and storage medium Download PDF

Info

Publication number
CN116170523A
Authority
CN
China
Prior art keywords
service
server
business
value
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211688302.3A
Other languages
Chinese (zh)
Inventor
于春梅
王健
王泽源
李争欣
李斯哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202211688302.3A priority Critical patent/CN116170523A/en
Publication of CN116170523A publication Critical patent/CN116170523A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681 Buffer or queue management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application relates to the field of communication technologies, and in particular to a service transmission method, device and storage medium, which can effectively reduce the load on a server. The method includes: determining a service cache queue when the load value of the server is greater than a preset threshold, the service cache queue being used to cache services sent by the server to a terminal; determining a target service in the service cache queue, the target service being a service in the service cache queue whose priority meets a preset condition; determining whether the current available load of the server is greater than the service size of the target service; and if so, sending the target service to the terminal. The present application is used in the process of service transmission.

Description

Service transmission method, device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a service transmission method, a device, and a storage medium.
Background
In the related art, when a terminal requests a service from a back-end server, a reverse proxy server is generally deployed in front of the back-end servers to implement load balancing among them. In this scenario, the terminal sends a service request to the reverse proxy server; the reverse proxy server selects a back-end server to process the request according to the service type of the request and the load conditions of the back-end servers, and forwards the request to the selected back-end server. The back-end server processes the request and returns the service response to the reverse proxy server, which forwards the response to the terminal.
However, when the reverse proxy server forwards service responses to the terminal, a large number of responses may overload the reverse proxy server. How to reduce the load on the reverse proxy server is therefore a technical problem that currently needs to be solved.
Disclosure of Invention
The application provides a service transmission method, a service transmission device and a storage medium, which can effectively reduce the load of a server.
In order to achieve the above purpose, the present application adopts the following technical scheme:
In a first aspect, the present application provides a service transmission method, the method including: determining a service cache queue when the load value of the server is greater than a preset threshold, the service cache queue being used to cache services sent by the server to a terminal; determining the priority of each service in the service cache queue, where the priority is determined according to at least one of: the load value of the server, the service type of the service, the service size of the service, and the duration the service has been in the cache queue; taking the service with the highest priority in the service cache queue as a target service; determining whether the current available load of the server is greater than the service size of the target service; and if so, sending the target service to the terminal.
With reference to the first aspect, in one possible implementation manner, determining a priority of each service in the service buffer queue includes: determining the sum of a first weight value, a first parameter value, a second parameter value, a third parameter value and a fourth parameter value of an ith service in a service cache queue as the priority of the ith service; the first weight value is the weight value of the service type of the ith service; the first parameter value is the product of the first coefficient and the load value of the server; the second parameter value is the product of the second coefficient and the load value of the server; the third parameter value is the product of the third coefficient and the service size of the ith service; the fourth parameter value is the product of the fourth coefficient and the duration of the ith service in the service buffer queue.
With reference to the first aspect, in one possible implementation manner, the priority Wi of the i-th service in the service buffer queue satisfies:
Wi = Ei + A*F + B*G + C*Ui + D*Vi
where Ei denotes the weight value of the service type of the i-th service; A denotes the first coefficient; F denotes the read-write margin value of the input-output device of the server; B denotes the second coefficient; G denotes the network margin value of the input-output device of the server; C denotes the third coefficient; Ui denotes the service size of the i-th service; D denotes the fourth coefficient; and Vi denotes the duration the i-th service has been in the service buffer queue. The first coefficient, the second coefficient, the third coefficient and the fourth coefficient are preset values.
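As an illustrative aid only (not part of the patent text), this formula can be written as a small Python function; the function name and argument names below are assumptions made for readability:

```python
def service_priority(E_i, U_i, V_i, F, G, A, B, C, D):
    """Sketch of the priority formula Wi = Ei + A*F + B*G + C*Ui + D*Vi.

    E_i: weight value of the i-th service's service type
    U_i: service size of the i-th service (e.g. in MB)
    V_i: duration the i-th service has been in the buffer queue (e.g. in ms)
    F:   read-write margin value of the server's input-output device
    G:   network margin value of the server's input-output device
    A, B, C, D: preset coefficients
    """
    return E_i + A * F + B * G + C * U_i + D * V_i
```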
With reference to the first aspect, in one possible implementation manner, the load value of the server includes a read-write margin value and a network margin value of the input-output device of the server; determining whether the current available load of the server is greater than the service size of the target service includes: determining whether the available read-write margin value of the input-output device and the available network margin value of the input-output device are both greater than the service size of the target service; and sending the target service to the terminal includes: if not, suspending sending the target service to the terminal; if so, sending the target service to the terminal.
In a second aspect, the present application provides a service transmission apparatus, including a processing unit and a communication unit. The processing unit is configured to determine a service cache queue when the load of the server is greater than a preset threshold, the service cache queue being used to cache services sent by the server to a terminal. The processing unit is further configured to determine the priority of each service in the service cache queue, where the priority is determined according to at least one of: the load value of the server, the service type of the service, the service size of the service, and the duration the service has been in the cache queue. The processing unit is further configured to take the service with the highest priority in the service cache queue as the target service. The processing unit is further configured to determine whether the current available load of the server is greater than the service size of the target service; and if so, the communication unit is configured to send the target service to the terminal.
With reference to the second aspect, in one possible implementation manner, determining a priority of each service in the service buffer queue includes: the processing unit is further used for determining that the sum of the first weight value, the first parameter value, the second parameter value, the third parameter value and the fourth parameter value of the ith service in the service cache queue is the priority of the ith service; the first weight value is the weight value of the service type of the ith service; the first parameter value is the product of the first coefficient and the load value of the server; the second parameter value is the product of the second coefficient and the load value of the server; the third parameter value is the product of the third coefficient and the service size of the ith service; the fourth parameter value is the product of the fourth coefficient and the duration of the ith service in the service buffer queue.
With reference to the second aspect, in one possible implementation manner, the priority Wi of the i-th service in the service buffer queue satisfies:
Wi = Ei + A*F + B*G + C*Ui + D*Vi
where Ei denotes the weight value of the service type of the i-th service; A denotes the first coefficient; F denotes the read-write margin value of the input-output device of the server; B denotes the second coefficient; G denotes the network margin value of the input-output device of the server; C denotes the third coefficient; Ui denotes the service size of the i-th service; D denotes the fourth coefficient; and Vi denotes the duration the i-th service has been in the service buffer queue. The first coefficient, the second coefficient, the third coefficient and the fourth coefficient are preset values.
With reference to the second aspect, in one possible implementation manner, the load value of the server includes a read-write margin value and a network margin value of the input-output device of the server; determining whether the current available load of the server is greater than the service size of the target service includes: the processing unit is further configured to determine whether the available read-write margin value of the input-output device and the available network margin value of the input-output device are both greater than the service size of the target service; and sending the target service to the terminal includes: if not, the processing unit suspends sending the target service to the terminal; if so, the communication unit sends the target service to the terminal.
In a third aspect, the present application provides a service transmission apparatus, including: a processor and a communication interface; the communication interface is coupled to a processor for running a computer program or instructions to implement the method of traffic transmission as described in any one of the possible implementations of the first aspect and the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having instructions stored therein which, when run on a terminal, cause the terminal to perform a traffic transmission method as described in any one of the possible implementations of the first aspect and the first aspect.
In this application, the names of the above-mentioned service transmission apparatuses do not constitute limitations on the devices or function modules themselves, and in actual implementation, these devices or function modules may appear under other names. Insofar as the function of each device or function module is similar to the present application, it is within the scope of the claims of the present application and the equivalents thereof.
These and other aspects of the present application will be more readily apparent from the following description.
Based on the above technical solution, in the service transmission method provided by the embodiments of the present application, when the load value of the server is greater than the preset threshold, a service buffer queue is determined, a target service is determined from the service buffer queue, and the service transmission apparatus sends the target service to the terminal only when the current available load of the server is greater than the service size of the target service. Under this solution, when its load is high the server buffers services into the service buffer queue and sends only the services whose priority is high and whose size the server's current load can accommodate, thereby effectively reducing the load on the server and avoiding a fully loaded server.
Drawings
Fig. 1 is a schematic structural diagram of a service transmission device provided in the present application;
fig. 2 is a flowchart of a service transmission method provided in the present application;
fig. 3 is a flowchart of another service transmission method provided in the present application;
fig. 4 is a flowchart of another service transmission method provided in the present application;
fig. 5 is a flowchart of another service transmission method provided in the present application;
fig. 6 is a schematic structural diagram of another service transmission device provided in the present application;
fig. 7 is a schematic structural diagram of another service transmission device provided in the present application.
Detailed Description
The service transmission method and device provided by the embodiment of the application are described in detail below with reference to the accompanying drawings.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone.
The terms "first" and "second" and the like in the description and in the drawings are used for distinguishing between different objects or for distinguishing between different processes of the same object and not for describing a particular sequential order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Fig. 1 is a schematic structural diagram of a service transmission device according to an embodiment of the present application, and as shown in fig. 1, the service transmission device 100 includes at least one processor 101, a communication line 102, at least one communication interface 104, and may further include a memory 103. The processor 101, the memory 103, and the communication interface 104 may be connected through a communication line 102.
The processor 101 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as: one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA).
Communication line 102 may include a pathway for communicating information between the aforementioned components.
The communication interface 104, for communicating with other devices or communication networks, may use any transceiver-like device, such as Ethernet, a radio access network (RAN), a wireless local area network (WLAN), etc.
The memory 103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In one possible design, the memory 103 may exist independently of the processor 101, i.e. the memory 103 may be a memory external to the processor 101; in this case the memory 103 may be connected to the processor 101 through the communication line 102 and used to store execution instructions or application program code, and the processor 101 controls the execution to implement the service transmission method provided in the embodiments described below. In yet another possible design, the memory 103 may be integrated with the processor 101, i.e. the memory 103 may be an internal memory of the processor 101; for example, the memory 103 may be a cache, and may be used to temporarily store some data and instruction information, etc.
As one implementation, the processor 101 may include one or more CPUs, such as CPU0 and CPU1 in fig. 1. As another implementation, the service transmission device 100 may include multiple processors, such as the processor 101 and the processor 107 in fig. 1. As yet another implementation, the service transmission device 100 may also include an output device 105 and an input device 106.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the network node is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described system, module and network node may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
Nginx (engine x) is a high-performance hypertext transfer protocol (HTTP) and reverse proxy web server that also provides IMAP/POP3/SMTP services. In the related art, a terminal sends a request to a back-end server through a reverse proxy server (nginx); the back-end server processes the request and feeds the response back to the terminal through the reverse proxy server. In this process, the input/output (I/O) devices on the reverse proxy server are often fully loaded, and there is currently no method that effectively reduces the load on the I/O devices. Meanwhile, in the prior art, the priorities of responses cannot be distinguished, so the user can only receive useless resources such as static files or pictures after waiting a long time.
In addition, when an external malicious request causes a large number of static files to be received, the existing reverse proxy server can block the interface requests of normal I/O devices, so it can neither effectively resist network attacks nor feed back responses normally.
In order to solve the problem that the load of a server cannot be effectively reduced in the prior art, the present application provides a service transmission method as shown in fig. 2. The method determines a service cache queue when the load of the server is greater than a preset threshold, determines a target service in the service cache queue, and finally determines whether the load required to transmit the target service is smaller than the current available load of the server; if so, the target service is sent to the terminal. Under this technical solution, when its load is high the server caches services into the service cache queue and sends only the requests whose priority is high and which fit the server's current available load, thereby effectively reducing the load on the server and avoiding a fully loaded server.
As shown in fig. 2, a flowchart of a service transmission method provided in an embodiment of the present application is shown, where the service transmission method provided in the embodiment of the present application may be applied to the service transmission device shown in fig. 1, and the service transmission method provided in the embodiment of the present application may be implemented by the following steps.
S201, the service transmission device determines a service cache queue under the condition that the load of the server is larger than a preset threshold value.
The service buffer queue is used for buffering the service sent by the server to the terminal.
In one possible implementation, the server may also be understood as the reverse proxy server, and the load of the server includes the load of the I/O devices on the reverse proxy server as well as other loads on the server. In this application, the load of the server is mainly described by taking the load of the server's I/O devices as an example.
In a specific implementation, the service transmission device monitors the load of the I/O device and compares it with a preset threshold configured in the service transmission device; if the load of the I/O device is greater than or equal to the preset threshold, the server enables a service buffer queue and buffers the services currently waiting to be sent to the terminal into that queue.
For example, the service transmission device monitors the load of the server and determines that the current load of the server is 50 MB; it compares this with the preset threshold of 40 MB, determines that the 50 MB load is greater than the 40 MB threshold, and therefore instructs the server to start a service buffer queue and buffer the services fed back by the back-end server into the service buffer queue.
Optionally, the service cached in the service cache queue includes service content, service size, and a timestamp of the service entering the cache queue.
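A minimal sketch of such a queue entry, purely for illustration (the class and field names are assumptions, and the service-type weight field is added here only because the priority formula discussed later uses it):

```python
from dataclasses import dataclass, field
import time

@dataclass
class CachedService:
    """Hypothetical shape of one entry in the service buffer queue."""
    content: bytes        # service content to be sent to the terminal
    size_mb: float        # service size
    type_weight: float    # weight value of the service type (Ei in the priority formula)
    enqueued_at: float = field(default_factory=time.time)  # timestamp when the service entered the queue

    def time_in_queue_ms(self) -> float:
        """Duration the service has been in the buffer queue, in milliseconds."""
        return (time.time() - self.enqueued_at) * 1000.0
```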
It should be noted that, in the embodiments of the present application, the server refers to the reverse proxy server, and the back-end server is the server that processes the service requests of the terminal.
S202, the service transmission device determines the target service in the service buffer queue.
The target service is a service with priority meeting preset conditions in a service cache queue.
In one possible implementation manner, the service transmission device determines the priority of each service in the service buffer queue, and determines that the service with the priority meeting the preset condition is the target service.
In one possible implementation manner, at every preset time interval, or each time after the target service has been sent to the terminal, the service transmission device recalculates the priority of each service in the service buffer queue and re-determines the service whose priority currently meets the preset condition as the target service.
S203, the service transmission device determines whether the current available load of the server is larger than the service size of the target service.
S204, if yes, the service transmission device sends the target service to the terminal.
For example, in the case where the service transmission device determines that the current available load of the server is 30MB and the service size of the target service is 10MB, the service transmission device determines that the current available load of the server is greater than the service size of the target service. That is, the service transmission apparatus determines that the current available load of the server satisfies the requirement of transmitting the target service, and at this time, the service transmission apparatus may transmit the target service to the terminal.
Compared with the prior art, in the service transmission method provided by the embodiments of the present invention, the service transmission device sends the target service to the terminal only when the current available load of the server is greater than the service size of the target service. When the load of the server is high, the service transmission device buffers services into the service buffer queue and sends only the services whose priority is high and which meet the server's current load requirement, thereby effectively reducing the load on the server and avoiding a fully loaded server.
In the following, a detailed description will be given of how the service transmission apparatus determines the target service in the service buffer queue in the embodiment of the present application, in connection with a specific embodiment.
Referring to fig. 2, as shown in fig. 3, how the service transmission device determines the target service in the service buffer queue may be specifically implemented by the following S301-S302, which are described in detail below:
S301, the service transmission device determines the priority of each service in the service buffer queue.
Wherein the priority is determined according to at least one of: the load value of the server, the service type of the service, the service size of the service, and the duration of the service in the cache queue.
In one possible implementation manner, the service transmission device calculates and determines the priority of each service in the service buffer queue according to at least one of the load value of the server, the service type of the service, the service size of the service and the duration of the service in the buffer queue.
In a specific implementation manner, the service transmission device determines that the sum of the first weight value, the first parameter value, the second parameter value, the third parameter value and the fourth parameter value of the ith service in the service buffer queue is the priority of the ith service.
The first weight value is the weight value of the service type of the ith service; the first parameter value is the product of the first coefficient and the load value of the server; the second parameter value is the product of the second coefficient and the load value of the server; the third parameter value is the product of the third coefficient and the service size of the ith service; the fourth parameter value is the product of the fourth coefficient and the duration of the ith service in the service buffer queue.
The load value of the server includes, as an example, a read-write margin value and a network margin value of an input-output device of the server.
The priority Wi of the i-th service satisfies the following Formula 1:
Wi = Ei + A*F + B*G + C*Ui + D*Vi    (Formula 1)
where Ei denotes the weight value of the service type of the i-th service; A denotes the first coefficient; F denotes the read-write margin value of the input-output device of the server; B denotes the second coefficient; G denotes the network margin value of the input-output device of the server; C denotes the third coefficient; Ui denotes the service size of the i-th service; D denotes the fourth coefficient; and Vi denotes the duration the i-th service has been in the service buffer queue. The first coefficient, the second coefficient, the third coefficient and the fourth coefficient are preset values.
For example, the first coefficient, the second coefficient, the third coefficient, and the fourth coefficient may be determined according to an empirical value of an operation and maintenance person.
The service type of the service A is static file acquisition, the service type of the service B is database data acquisition, the service type of the service C is user entity operation, the weight value of the static file acquisition is 2.5, the weight value of the database data acquisition is 3.5, and the weight value of the user entity operation is 4.
At this time, the service transmission apparatus calculates the priority of service A according to Formula 1 above: WA = 2.5 + 5*2MB/s + 5*5MB/s + (-6)*0.5MB + 0.008*1000ms = 42.5.
The service transmission device calculates the priority of service B: WB = 3.5 + 5*2MB/s + 5*5MB/s + (-2)*2MB + 0.005*2000ms = 44.5.
The service transmission device calculates the priority of service C: WC = 4 + 5*2MB/s + 5*5MB/s + 2*0.8MB + 0.007*1500ms = 51.1.
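These three computations can be checked with a short script; the per-service coefficient values below are simply the ones quoted in the example, not values prescribed by the patent:

```python
def priority(E, A, F, B, G, C, U, D, V):
    # Formula 1: W = E + A*F + B*G + C*U + D*V
    return E + A * F + B * G + C * U + D * V

# Shared terms taken from the example: A = 5, F = 2 (MB/s), B = 5, G = 5 (MB/s).
W_A = priority(E=2.5, A=5, F=2, B=5, G=5, C=-6, U=0.5, D=0.008, V=1000)
W_B = priority(E=3.5, A=5, F=2, B=5, G=5, C=-2, U=2.0, D=0.005, V=2000)
W_C = priority(E=4.0, A=5, F=2, B=5, G=5, C=2,  U=0.8, D=0.007, V=1500)
print(round(W_A, 1), round(W_B, 1), round(W_C, 1))  # 42.5 44.5 51.1 -> service C ranks highest
```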
S302, the service transmission device takes the service with the highest priority in the service buffer queue as the target service.
In combination with the example in S301, the service transmission device compares the priority 42.5 of service A, the priority 44.5 of service B, and the priority 51.1 of service C, and determines that the priority of service C is the highest; at this time, the service transmission device determines that service C is the target service.
Based on the above technical features, in the service transmission method provided by the present application, the service transmission device determines the priority of each service in the service buffer queue, identifies the service with the highest priority according to those priorities, and feeds that service back to the terminal first, thereby guaranteeing the user experience of high-priority services.
The above describes in detail how the service transmission apparatus determines the target service in the service buffer queue.
The specific implementation of whether the service transmission device feeds the target service back to the terminal is described in detail below.
As shown in fig. 4, whether the service transmission device feeds back the target service to the terminal may be specifically implemented through S401-S403, which is described in detail below:
S401, the service transmission device determines whether the available read-write margin value of the input/output device and the available network margin value of the input/output device are both larger than the service size of the target service.
In one possible implementation, the service transmission apparatus compares the available read-write margin value and the available network margin value of the input-output device with the service size of the target service, respectively, and determines whether both are greater than the service size of the target service. The service size of the target service may be understood as the data size of the service, in MB. The available read-write margin value of the input-output device may be understood as the read-write capacity the server's I/O device can currently provide, and the available network margin value of the input-output device may be understood as the network bandwidth the server can currently provide for transmission.
S402, if not, the service transmission device suspends sending the target service to the terminal.
Illustratively, in combination with the examples in S301 and S302 above, service C is the target service, and the service size of service C is 0.8 MB. The service transmission device determines that the available read-write margin value of the input-output device is currently 0.5 MB and the available network margin value of the input-output device is 15 MB; it compares the available read-write margin value of 0.5 MB with the 0.8 MB service size of service C, and then compares the available network margin value of 15 MB with the 0.8 MB service size of service C.
The service transmission device determines that the available read-write margin value of 0.5 MB is smaller than the 0.8 MB service size of service C, so the input-output device cannot satisfy service C, and the service transmission device suspends sending service C to the terminal.
It should be noted that, when the service transmission device suspends sending the target service to the terminal, the service transmission device takes the service with the second highest priority in the service buffer queue as the new target service and performs the above steps again to determine whether to send the new target service to the terminal.
S403, if yes, the service transmission device sends the target service to the terminal.
Illustratively, in combination with the examples in S301 and S302 above, service C is the target service, and the service size of service C is 0.8 MB. The service transmission device determines that the available read-write margin value of the input-output device is currently 8 MB and the available network margin value of the input-output device is 20 MB; it compares each of them with the 0.8 MB service size of service C, determines that both the 8 MB read-write margin and the 20 MB network margin are greater than 0.8 MB, and feeds service C back to the terminal.
In other words, in contrast to the earlier example under S401, the available read-write margin value of 8 MB and the available network margin value of 20 MB are both greater than the 0.8 MB service size of service C, so the service transmission apparatus feeds service C back to the terminal.
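A minimal sketch of this double margin check, with assumed names (the patent text does not prescribe an implementation):

```python
def can_send(rw_margin_mb: float, net_margin_mb: float, service_size_mb: float) -> bool:
    # Send only if BOTH the available read-write margin and the available
    # network margin of the input-output device exceed the target service's size.
    return rw_margin_mb > service_size_mb and net_margin_mb > service_size_mb

# Numbers from the examples in S402 and S403, where service C is 0.8 MB:
print(can_send(0.5, 15, 0.8))  # False -> sending is suspended
print(can_send(8, 20, 0.8))    # True  -> service C is sent to the terminal
```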
The overall process by which the service transmission apparatus reduces the load on the server is described below with reference to fig. 5:
S501, the service transmission device monitors the condition of the server, and if the load of the server reaches a preset threshold, the service transmission device starts a service buffer queue.
The specific implementation of S501 is similar to S201 described above, and the specific implementation process may refer to S201, which is not described herein.
S502, after the service buffer queue is started, the server buffers all the services into the service buffer queue, and the service transmission device determines the service buffer queue.
The specific implementation of S502 is similar to S201 described above, and the specific implementation process may refer to S201, which is not described herein.
S503, the service transmission device calculates the priority of each service in the service buffer queue.
The specific implementation of S503 is similar to S301 described above, and the specific implementation process may refer to S301, which is not described herein.
S504, the service transmission device determines the target service in the service buffer queue according to the priority of each service.
The specific implementation of S504 is similar to S302 described above, and the specific implementation process may refer to S302, which is not described herein.
S505, the service transmission device determines whether the available read-write margin value of the input/output device and the available network margin value of the input/output device are both larger than the service size of the target service.
The specific implementation of S505 is similar to S401 described above, and the specific implementation process may refer to S401, which is not described herein.
S506, if the available read-write margin value of the input-output device and the available network margin value of the input-output device are both greater than the service size of the target service, the service transmission device feeds the target service back to the terminal.
The specific implementation of S506 is similar to S403, and the specific implementation process may refer to S403, which is not described herein.
S507, if not, the service transmission device suspends sending the target service to the terminal; meanwhile, the service transmission apparatus acquires the service with the second highest priority as the new target service and executes the above-described S504-S506 again.
The specific implementation of S507 is similar to S402 described above, and the specific implementation process may refer to S402, which is not described herein.
S508, after each preset time or each time the target service is sent to the terminal, the service transmission device redetermines the priority of each service in the service buffer queue.
The specific implementation of S508 is similar to S202 described above, and the specific implementation process may refer to S202, which is not described herein.
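Putting S501 to S508 together, one possible end-to-end sketch reads as follows; the object attributes, helper names and control flow are assumptions, since the patent does not fix an implementation:

```python
import time

def transmit_buffered_services(queue, server, send_to_terminal, threshold_mb, coeffs):
    """Illustrative loop for S501-S508: buffer when overloaded, then send by priority.

    queue:            list of CachedService-like objects (type_weight, size_mb, enqueued_at)
    server:           object exposing load_mb, rw_margin_mb and net_margin_mb attributes
    send_to_terminal: callback that actually transmits one service
    threshold_mb:     preset load threshold
    coeffs:           preset coefficients (A, B, C, D) of Formula 1
    """
    A, B, C, D = coeffs
    if server.load_mb <= threshold_mb:
        return  # S501: the buffer queue is only used once the load exceeds the threshold

    while queue:
        # S503/S508: (re)compute priorities with Formula 1 before each transmission.
        now = time.time()
        ranked = sorted(
            queue,
            key=lambda s: (s.type_weight
                           + A * server.rw_margin_mb
                           + B * server.net_margin_mb
                           + C * s.size_mb
                           + D * (now - s.enqueued_at) * 1000.0),
            reverse=True,
        )
        sent = False
        for svc in ranked:  # S504/S507: highest priority first, then the next highest
            # S505: both margins must be greater than the service size.
            if server.rw_margin_mb > svc.size_mb and server.net_margin_mb > svc.size_mb:
                send_to_terminal(svc)  # S506
                queue.remove(svc)
                sent = True
                break
        if not sent:
            break  # no cached service currently fits the available load; retry later
```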
Based on the above technical solution, in the service transmission method provided by the present application, the service transmission device determines whether the available read-write margin value of the input-output device and the available network margin value of the input-output device are both greater than the service size of the target service, and feeds the target service back to the terminal only when both values are greater than the service size of the target service. If either value is smaller than the service size of the target service, the server cannot satisfy the target service, and sending the target service to the terminal is suspended, thereby avoiding a fully loaded server.
The embodiment of the present application may divide the functional modules or functional units of the service transmission apparatus according to the above method example, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware, or in software functional modules or functional units. The division of the modules or units in the embodiments of the present application is merely a logic function division, and other division manners may be implemented in practice.
Fig. 6 is a schematic structural diagram of a service transmission device according to an embodiment of the present application, where the service transmission device includes: a processing unit 601 and a communication unit 602; a processing unit 601, configured to determine a service cache queue when a load of a server is greater than a preset threshold; the service cache queue is used for caching the service sent by the server to the terminal; the processing unit 601 is further configured to determine a target service in the service cache queue; the target service is a service with priority meeting preset conditions in a service cache queue; the processing unit 601 is further configured to determine whether a current available load of the server is greater than a service size of the target service; if yes, the communication unit 602 is configured to send the target service to the terminal.
Optionally, to determine the target service in the service buffer queue, the processing unit 601 is further configured to determine the priority of each service in the service buffer queue, where the priority is determined according to at least one of: the load value of the server, the service type of the service, the service size of the service, and the duration the service has been in the buffer queue; and to take the service with the highest priority in the service buffer queue as the target service.
Optionally, to determine the priority of each service in the service buffer queue, the processing unit 601 is further configured to determine the sum of the first weight value, the first parameter value, the second parameter value, the third parameter value, and the fourth parameter value of the i-th service in the service buffer queue as the priority of the i-th service; the first weight value is the weight value of the service type of the i-th service; the first parameter value is the product of the first coefficient and the load value of the server; the second parameter value is the product of the second coefficient and the load value of the server; the third parameter value is the product of the third coefficient and the service size of the i-th service; and the fourth parameter value is the product of the fourth coefficient and the duration the i-th service has been in the service buffer queue.
Optionally, the load value of the server includes a read-write margin value and a network margin value of the input-output device of the server; determining whether the current available load of the server is greater than the service size of the target service includes: the processing unit 601 is further configured to determine whether the available read-write margin value of the input-output device and the available network margin value of the input-output device are both greater than the service size of the target service; and sending the target service to the terminal includes: if not, the processing unit 601 suspends sending the target service to the terminal; if so, the communication unit 602 sends the target service to the terminal.
When implemented in hardware, the communication unit 602 in the embodiments of the present application may be integrated on a communication interface, and the processing unit 601 may be integrated on a processor. A specific implementation is shown in fig. 7.
Fig. 7 shows another possible schematic structural diagram of the service transmission device involved in the above embodiments. The service transmission device includes a processor 702 and a communication interface 703. The processor 702 is configured to control and manage the actions of the service transmission device, for example, to perform the steps performed by the processing unit 601 described above, and/or to perform other processes of the techniques described herein. The communication interface 703 is used to support communication between the service transmission device and other network entities, for example, to perform the steps performed by the communication unit 602 described above. The service transmission device may further include a memory 701 and a bus 704, where the memory 701 is used to store program code and data of the service transmission device.
The memory 701 may be a memory in the service transmission device or the like, and may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk or a solid-state disk; the memory may also include a combination of the above types of memory.
The processor 702 may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with the present disclosure. The processor may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may also be a combination that implements a computing function, for example, a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 704 may be an extended industry standard architecture (EISA) bus or the like. The bus 704 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or only one type of bus.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
The present application provides a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method of traffic transmission in the method embodiments described above.
The embodiment of the application also provides a computer readable storage medium, in which instructions are stored, which when executed on a computer, cause the computer to execute the service transmission method in the method flow shown in the method embodiment.
The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a register, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, any suitable combination of the foregoing, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In the context of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Since the service transmission device, the computer readable storage medium and the computer program product in the embodiments of the present invention can be applied to the above-mentioned method, the technical effects that can be obtained by the method can also refer to the above-mentioned method embodiments, and the embodiments of the present invention are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the partitioning of elements is merely a logical functional partitioning, and there may be additional partitioning in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not implemented. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, indirect coupling or communication connection of devices or units, electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A service transmission method, characterized in that the method comprises:
when the load value of a server is greater than a preset threshold, determining a service cache queue, wherein the service cache queue is used to cache services to be sent by the server to a terminal;
determining the priority of each service in the service cache queue, wherein the priority is determined according to at least one of the following: the load value of the server, the service type of the service, the service size of the service, and the duration of the service in the cache queue;
taking the service with the highest priority in the service cache queue as a target service;
determining whether the current available load of the server is greater than the service size of the target service; and
if so, sending the target service to the terminal.

2. The method according to claim 1, characterized in that determining the priority of each service in the service cache queue comprises:
determining the sum of a first weight value, a first parameter value, a second parameter value, a third parameter value, and a fourth parameter value of the i-th service in the service cache queue as the priority of the i-th service;
wherein the first weight value is the weight value of the service type of the i-th service; the first parameter value is the product of a first coefficient and the load value of the server; the second parameter value is the product of a second coefficient and the load value of the server; the third parameter value is the product of a third coefficient and the service size of the i-th service; and the fourth parameter value is the product of a fourth coefficient and the duration for which the i-th service has been in the service cache queue.

3. The method according to claim 2, characterized in that the priority W_i of the i-th service in the service cache queue satisfies:
W_i = E_i + A*F + B*G + C*U_i + D*V_i
wherein E_i is the weight value of the service type of the i-th service; A is the first coefficient; F is the read-write margin value of the input/output device of the server; B is the second coefficient; G is the network margin value of the input/output device of the server; C is the third coefficient; U_i is the service size of the i-th service; D is the fourth coefficient; V_i is the duration for which the i-th service has been in the service cache queue; and the first coefficient, the second coefficient, the third coefficient, and the fourth coefficient are preset values.

4. The method according to any one of claims 1-3, characterized in that the load value of the server comprises the read-write margin value and the network margin value of the input/output device of the server;
determining whether the current available load of the server is greater than the service size of the target service comprises:
determining whether the available read-write margin value of the input/output device and the available network margin value of the input/output device are both greater than the service size of the target service; and
sending the target service to the terminal comprises:
if not, suspending sending the target service to the terminal; and
if so, sending the target service to the terminal.

5. A service transmission apparatus, characterized in that the apparatus comprises a processing unit and a communication unit;
the processing unit is configured to determine a service cache queue when the load of the server is greater than a preset threshold, wherein the service cache queue is used to cache services to be sent by the server to a terminal;
the processing unit is further configured to determine the priority of each service in the service cache queue, wherein the priority is determined according to at least one of the following: the load value of the server, the service type of the service, the service size of the service, and the duration of the service in the cache queue;
the processing unit is further configured to take the service with the highest priority in the service cache queue as the target service;
the processing unit is further configured to determine whether the current available load of the server is greater than the service size of the target service; and
if so, the communication unit is configured to send the target service to the terminal.

6. The apparatus according to claim 5, characterized in that determining the priority of each service in the service cache queue comprises:
the processing unit is further configured to determine the sum of a first weight value, a first parameter value, a second parameter value, a third parameter value, and a fourth parameter value of the i-th service in the service cache queue as the priority of the i-th service;
wherein the first weight value is the weight value of the service type of the i-th service; the first parameter value is the product of a first coefficient and the load value of the server; the second parameter value is the product of a second coefficient and the load value of the server; the third parameter value is the product of a third coefficient and the service size of the i-th service; and the fourth parameter value is the product of a fourth coefficient and the duration for which the i-th service has been in the service cache queue.

7. The apparatus according to claim 6, characterized in that the priority W_i of the i-th service in the service cache queue satisfies:
W_i = E_i + A*F + B*G + C*U_i + D*V_i
wherein E_i is the weight value of the service type of the i-th service; A is the first coefficient; F is the read-write margin value of the input/output device of the server; B is the second coefficient; G is the network margin value of the input/output device of the server; C is the third coefficient; U_i is the service size of the i-th service; D is the fourth coefficient; V_i is the duration for which the i-th service has been in the service cache queue; and the first coefficient, the second coefficient, the third coefficient, and the fourth coefficient are preset values.

8. The apparatus according to any one of claims 5-7, characterized in that the load value of the server comprises the read-write margin value and the network margin value of the input/output device of the server;
determining whether the current available load of the server is greater than the service size of the target service comprises:
the processing unit is further configured to determine whether the available read-write margin value of the input/output device and the available network margin value of the input/output device are both greater than the service size of the target service; and
sending the target service to the terminal comprises:
if not, the processing unit suspends sending the target service to the terminal; and
if so, the communication unit sends the target service to the terminal.

9. A service transmission apparatus, characterized by comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a computer program or instructions to implement the service transmission method according to any one of claims 1-4.

10. A computer-readable storage medium storing instructions, characterized in that, when a computer executes the instructions, the computer performs the service transmission method according to any one of claims 1-4.
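The priority weighting of claim 3 and the margin check of claims 1 and 4 can be illustrated with a short sketch. The Python below is an illustration only, under assumed names and values: the Service class, the coefficient presets A, B, C, D, the per-type weights, and the send callback are hypothetical and are not defined by the claims.

from dataclasses import dataclass, field
import time

# Assumed preset coefficients (the claims only require that these are preset values).
A, B, C, D = 0.5, 0.5, 0.01, 0.1
# Assumed per-service-type weight values E_i.
TYPE_WEIGHTS = {"video": 3.0, "file": 2.0, "message": 1.0}

@dataclass
class Service:
    service_type: str
    size: float                                        # U_i: service size
    enqueued_at: float = field(default_factory=time.time)

def priority(svc, rw_margin, net_margin, now=None):
    # W_i = E_i + A*F + B*G + C*U_i + D*V_i, with F and G taken as the
    # server's read-write margin and network margin (claim 3).
    now = time.time() if now is None else now
    e_i = TYPE_WEIGHTS.get(svc.service_type, 1.0)      # E_i: weight of the service type
    v_i = now - svc.enqueued_at                        # V_i: time spent in the cache queue
    return e_i + A * rw_margin + B * net_margin + C * svc.size + D * v_i

def dispatch(queue, rw_margin, net_margin, send):
    # Pick the highest-priority cached service; send it only when both available
    # margins exceed its size (claims 1 and 4), otherwise hold it back.
    if not queue:
        return None
    target = max(queue, key=lambda s: priority(s, rw_margin, net_margin))
    if rw_margin > target.size and net_margin > target.size:
        queue.remove(target)
        send(target)
        return target
    return None  # suspend sending until enough capacity is available

In this sketch, dispatch would be invoked only while the server's load value exceeds the preset threshold; how the read-write and network margins are measured, and which coefficient and weight values are used, are left open by the claims and would be chosen per deployment.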
CN202211688302.3A 2022-12-27 2022-12-27 Service transmission method, device and storage medium Pending CN116170523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211688302.3A CN116170523A (en) 2022-12-27 2022-12-27 Service transmission method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211688302.3A CN116170523A (en) 2022-12-27 2022-12-27 Service transmission method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116170523A true CN116170523A (en) 2023-05-26

Family

ID=86410472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211688302.3A Pending CN116170523A (en) 2022-12-27 2022-12-27 Service transmission method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116170523A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090077233A1 (en) * 2006-04-26 2009-03-19 Ryosuke Kurebayashi Load Control Device and Method Thereof
CN103747030A (en) * 2013-12-12 2014-04-23 浪潮电子信息产业股份有限公司 Nginx server intelligent cache method based on improved particle swarm optimization
CN107483976A (en) * 2017-09-26 2017-12-15 武汉斗鱼网络科技有限公司 Live management-control method, device and electronic equipment
CN112751912A (en) * 2020-12-15 2021-05-04 北京金山云网络技术有限公司 Configuration adjustment method and device and electronic equipment
CN113434793A (en) * 2021-06-03 2021-09-24 北京网瑞达科技有限公司 Smooth transition method and system based on WEB reverse proxy


Similar Documents

Publication Publication Date Title
US9882975B2 (en) Method and apparatus for buffering and obtaining resources, resource buffering system
US7945736B2 (en) Dynamic load management of network memory
US8248945B1 (en) System and method for Ethernet per priority pause packet flow control buffering
KR100506253B1 (en) Device and Method for minimizing transmission delay in data communication system
US20060112155A1 (en) System and method for managing quality of service for a storage system
JPWO2007125942A1 (en) Load control device and method thereof
CN112445857A (en) Resource quota management method and device based on database
CN112199309B (en) Data reading method and device based on DMA engine and data transmission system
CN114595043A (en) A kind of IO scheduling method and device
CN112600761A (en) Resource allocation method, device and storage medium
CN115176453B (en) Message caching method, memory allocator and message forwarding system
US8751750B2 (en) Cache device, data management method, program, and cache system
CN114866475B (en) Network-on-chip congestion control method, system, device and storage medium
EP4394573B1 (en) Data processing method and related device
CN116170523A (en) Service transmission method, device and storage medium
JP4394710B2 (en) Load control apparatus, method, and program
CN109951540A (en) A data transmission method, device and electronic device
US20130254268A1 (en) Method for streaming media and media controller
US10601444B2 (en) Information processing apparatus, information processing method, and recording medium storing program
CN117891779A (en) Access method and device of network file system, storage medium and electronic equipment
US9678922B2 (en) Data storage control system, data storage control method, and data storage control program
CA3163480A1 (en) Systems and methods for storing content items in secondary storage
US11811870B1 (en) Methods and systems for dynamically adjusting data chunk sizes copied over a network
KR102813876B1 (en) Apparatus and method for accelerating network transmission in memory disaggregation environment
CN120104069B (en) Data synchronization method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination