
CN117812152A - Dynamic adjustment method and device for buffer size and maximum total number of request threads - Google Patents


Info

Publication number
CN117812152A
CN117812152A
Authority
CN
China
Prior art keywords
throughput
maximum
total number
response
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311531545.0A
Other languages
Chinese (zh)
Inventor
李向林
洪合俊
林志玮
黄志炜
张炜铭
Current Assignee
Xiamen Meiya Pico Information Co Ltd
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd filed Critical Xiamen Meiya Pico Information Co Ltd
Priority to CN202311531545.0A
Publication of CN117812152A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L 43/0888 Throughput
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/52 Queue scheduling by attributing bandwidth to queues
    • H04L 47/522 Dynamic queue service slot or variable bandwidth allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


The present invention discloses a method and device for dynamically adjusting the buffer size and the maximum total number of request threads. The method comprises: obtaining a first throughput of the network. When the first throughput does not exceed the maximum network bandwidth value and a certain amount of cached data exists in the current program memory, or when the first throughput has exceeded the maximum network bandwidth value, the disk write rate and the CPU utilization of the corresponding core are detected; if the disk write rate exceeds a first threshold or the CPU utilization exceeds a second threshold, the maximum total number of request threads is reduced, and if the disk write rate still exceeds the first threshold, the buffer size is reduced as well. When the first throughput does not exceed the maximum network bandwidth value and the current program memory is empty, the CPU utilization is detected; if the maximum total number of request threads does not exceed a third threshold or the CPU utilization does not exceed the second threshold, the maximum total number of request threads is increased. A second throughput of the network is then obtained, and the buffer size is increased according to the second throughput, improving bandwidth utilization.

Description

Dynamic adjustment method and device for buffer size and maximum total number of request threads
Technical Field
The invention relates to the field of computers, and in particular to a method and a device for dynamically adjusting the buffer size and the maximum total number of request threads.
Background
With the popularity of 5G technology and ever-improving network infrastructure, network bandwidth is faster than ever before. In many areas, people enjoy high-speed fiber-optic and mobile networks, making information transmission more rapid and efficient, and as technology continues to advance, network bandwidth will keep increasing. Meanwhile, with continuing progress in computer memory, storage capacity, and processor core technology, computers can process large amounts of data and applications faster, and growing core counts let a computer handle more tasks at the same time. As a result, forensic examiners can no longer fix (acquire) remote server images the way they used to: they now need faster, more efficient, and more comprehensive retrieval of image data from remote servers.
Currently, many clustered and distributed systems are composed of multiple servers, and it is common for a server's storage capacity to exceed the terabyte level. This means that forensic examiners must spend more time and effort acquiring and processing data from remote servers. To address this challenge, they need to build a well-equipped laboratory network environment and hardware, and use efficient data-processing tools and algorithms to analyze the data quickly and extract evidence. However, the tools currently on the market for fixing remote server image data do not make full use of the available network bandwidth and hardware resources to adjust, in real time, the size of the data blocks downloaded for local fixing. As a result, examiners cannot maximize the use of network bandwidth and hardware resources to improve forensic efficiency, and data may be lost or incomplete during the forensic process.
Most remote-server image-fixing tools currently on the market preset the buffer size according to the actual condition of the device before remote data fixing begins, in order to improve network imaging capability. Adjusting the buffer size manually has the following drawbacks:
1. The buffer cannot be adjusted in real time during remote image fixing according to the actual condition of the device, so hardware resources cannot be fully utilized.
2. Monitoring during image fixing relies on a third-party tool, after which the configuration file must be edited by hand to adjust the size of the data blocks requested from the server.
3. On machines with fewer CPU cores, an unreasonable buffer size may prevent the program from running normally, or cause crashes, lost data packets, and similar failures during requests.
4. Adjusting the network configuration file in real time based on human judgment during actual image fixing may be inaccurate, requiring repeated observation and adjustment, which inevitably wastes a great deal of time.
Disclosure of Invention
To solve the technical problems mentioned above, an objective of the embodiments of the present application is to provide a method and an apparatus for dynamically adjusting the buffer size and the maximum total number of request threads.
In a first aspect, the present invention provides a method for dynamically adjusting a buffer size and a maximum total number of request threads, including the steps of:
acquiring the preset buffer size and maximum total number of request threads of a request end, detecting the maximum network bandwidth value from the request end to a target server, and obtaining a first throughput of the network;
in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds cached data exceeding a preset amount, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detecting the disk write rate of the request end and the CPU utilization of the corresponding core; in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reducing the maximum total number of request threads of the corresponding core, and in response to determining that the disk write rate still exceeds the first threshold, reducing the buffer size of the request end and the current program memory size;
in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds no cached data exceeding the preset amount, detecting the CPU utilization of the corresponding core, and in response to determining that the maximum total number of request threads does not exceed a third threshold or the CPU utilization of the corresponding core does not exceed the second threshold, increasing the maximum total number of request threads of the corresponding core;
obtaining a second throughput of the network, and in response to determining that the second throughput does not exceed the maximum network bandwidth value, increasing the buffer size of the request end and the current program memory size;
then obtaining a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, ending the adjustment.
Preferably, the method further comprises:
a third throughput of the network is obtained in response to determining that the disk write rate does not exceed the first threshold or that the second throughput exceeds the maximum network bandwidth value, and the adjusting is ended in response to determining that the third throughput has exceeded the maximum network bandwidth value.
Preferably, the method further comprises:
in response to determining that the third throughput does not exceed the maximum network bandwidth value, the maximum total number of request threads is increased.
Preferably, the first throughput, the second throughput and the third throughput are all obtained by direct statistics on the network card of the request end.
Preferably, the current program memory size is the sum of the sizes of a plurality of buffers inside the program memory, and in response to determining that no maximum value of the current program memory size is preset, the maximum value is determined through a dump mechanism.
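As an illustrative sketch only, the branch logic of the steps above can be expressed as a pure function over one snapshot of system metrics. All names, units, and concrete thresholds here (including the 32-thread cap, i.e. 8 threads per core on an assumed 4-core machine) are assumptions for illustration, not the patent's reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    throughput: float        # bytes/s, counted on the network card
    max_bandwidth: float     # detected request-end-to-server bandwidth value
    cached_buffers: int      # cached-but-unwritten data, in buffer-size units
    disk_write_ratio: float  # current write rate / maximum write rate (0..1)
    cpu_utilization: float   # utilization of the corresponding core (0..1)

def adjust_once(m: Metrics, threads: int, buf_size: int,
                disk_thresh: float = 0.9,   # first threshold (90% of max rate)
                cpu_thresh: float = 0.8,    # second threshold (80% utilization)
                thread_cap: int = 32):      # third threshold: 8 x cores (4 assumed)
    """One pass of the branch logic; returns the new (threads, buf_size)."""
    over_bw = m.throughput > m.max_bandwidth
    has_cache = m.cached_buffers > 2  # "preset amount": roughly 2 buffer sizes
    if over_bw or has_cache:
        # Step 2: back off when disk IO or the corresponding core is saturated.
        if m.disk_write_ratio > disk_thresh or m.cpu_utilization > cpu_thresh:
            threads = max(1, threads // 2)
            if m.disk_write_ratio > disk_thresh:
                buf_size = max(1, buf_size // 2)
    else:
        # Step 3: ramp up while the thread cap or the CPU threshold allows it.
        if threads <= thread_cap or m.cpu_utilization <= cpu_thresh:
            threads += 1
    return threads, buf_size
```

In a real tuner this function would be called after each throughput sample, with the second and third throughput measurements deciding whether to grow the buffers or stop.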
In a second aspect, the present invention provides a device for dynamically adjusting a buffer size and a maximum total number of request threads, including:
the data acquisition module, configured to acquire the preset buffer size and maximum total number of request threads of a request end, detect the maximum network bandwidth value from the request end to a target server, and obtain a first throughput of the network;
the first adjustment module, configured to detect the disk write rate of the request end and the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds cached data exceeding a preset amount, or that the first throughput has exceeded the maximum network bandwidth value; to reduce the maximum total number of request threads of the corresponding core in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold; and to reduce the buffer size of the request end and the current program memory size in response to determining that the disk write rate still exceeds the first threshold;
the second adjustment module, configured to detect the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds no cached data exceeding the preset amount, and to increase the maximum total number of request threads of the corresponding core in response to determining that the maximum total number of request threads does not exceed a third threshold or the CPU utilization of the corresponding core does not exceed the second threshold;
the third adjustment module, configured to obtain a second throughput of the network, and to increase the buffer size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value;
the end adjustment module, configured to then obtain a third throughput of the network, and to end the adjustment in response to determining that the third throughput has exceeded the maximum network bandwidth value.
In a third aspect, the present invention provides an electronic device comprising one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
(1) By presetting a dynamic adjustment process and detecting network bandwidth processing capability in real time, the method maximizes the utilization of local device resources and network bandwidth, thereby improving the efficiency of remote data fixing.
(2) By dynamically analyzing the current disk IO usage, the current program memory usage, and the current network throughput, the method adjusts the buffer size and the maximum total number of request threads to fit the current device resource environment, combined with empirical values for network data transmission drawn from real-world practice, achieving better results when fixing remote server data in the electronic-data forensics industry.
(3) The method is suitable not only for remote image fixing and evidence collection, where it improves fixing efficiency, but also for real-world scenarios such as read-write separation, chunked transmission, and compressed data transmission.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an exemplary device architecture diagram to which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of a method for dynamically adjusting the buffer size and the maximum total number of request threads according to an embodiment of the present application;
FIG. 3 is a logic diagram of a method for dynamically adjusting the buffer size and the maximum total number of request threads according to an embodiment of the present application;
FIG. 4 is a diagram showing the results obtained with the dynamic adjustment method for the buffer size and the maximum total number of request threads of an embodiment of the present application;
FIG. 5 is a diagram showing the results obtained without the dynamic adjustment method for the buffer size and the maximum total number of request threads of an embodiment of the present application;
FIG. 6 is a diagram of a device for dynamically adjusting the buffer size and the maximum total number of request threads according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computer device suitable for implementing the embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
FIG. 1 illustrates an exemplary device architecture 100 in which a method of dynamically adjusting buffer size and maximum number of request threads or a device for dynamically adjusting buffer size and maximum number of request threads of embodiments of the present application may be applied.
As shown in fig. 1, the apparatus architecture 100 may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the first terminal device 101, the second terminal device 102, the third terminal device 103, and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the first terminal device 101, the second terminal device 102, or the third terminal device 103, to receive or send messages and the like. Various applications, such as data processing applications and file processing applications, may be installed on the first terminal device 101, the second terminal device 102, and the third terminal device 103.
The first terminal device 101, the second terminal device 102 and the third terminal device 103 may be hardware or software. When the first terminal device 101, the second terminal device 102, and the third terminal device 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. When the first terminal apparatus 101, the second terminal apparatus 102, and the third terminal apparatus 103 are software, they can be installed in the above-listed electronic apparatuses. Which may be implemented as multiple software or software modules (e.g., software or software modules for providing distributed services) or as a single software or software module. The present invention is not particularly limited herein.
The server 105 may be a server that provides various services, for example a background data processing server that processes files or data uploaded by the first terminal device 101, the second terminal device 102, and the third terminal device 103. The background data processing server can process the acquired files or data to generate a processing result.
It should be noted that, the method for dynamically adjusting the buffer size and the total number of the maximum request threads provided in the embodiments of the present application may be executed by the server 105, or may be executed by the first terminal device 101, the second terminal device 102, or the third terminal device 103, and correspondingly, the dynamic adjusting device for the buffer size and the total number of the maximum request threads may be set in the server 105, or may be set in the first terminal device 101, the second terminal device 102, or the third terminal device 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the processed data does not need to be acquired from a remote location, the above-described apparatus architecture may not include a network, but only a server or terminal device.
Fig. 2 shows a method for dynamically adjusting the buffer size and the total number of maximum request threads according to an embodiment of the present application, including the following steps:
s1, acquiring the size of a buffer zone and the total number of maximum request threads of a preset request end, detecting the maximum network bandwidth value from the request end to a target server, and acquiring the first throughput of a network.
In a specific embodiment, the first throughput, the second throughput and the third throughput are all obtained by direct statistics on the network card of the request end.
Specifically, the remote image data-fixing process is taken as an example to describe the steps of the dynamic adjustment method for the buffer size and the maximum total number of request threads provided by the embodiments of the present application; the scheme is also suitable for real-world scenarios such as read-write separation, chunked transmission, and compressed data transmission.
Referring to fig. 3, when fixing data to a remote image, the buffer size and the maximum total number of request threads are preset according to the programming language and, drawing on practical experience, according to the operating system. Based on the device the request end currently runs on, the maximum network bandwidth value from the request end to the target server is detected through basic socket network communication, and the first throughput of the current network is counted directly on the network card. Counting directly on the network card rather than in the kernel layer reduces kernel pressure and avoids a large number of IO copies.
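On Linux, one lightweight way to count traffic at the NIC level, in the spirit of the network-card accounting described above, is to sample the per-interface byte counters exposed in /proc/net/dev. This is an illustrative sketch, not the patent's implementation; the parsing is separated from file IO so it also works on captured text.

```python
def rx_bytes(counters_text: str, iface: str) -> int:
    """Return the received-bytes counter for `iface` from text in
    /proc/net/dev format (on a live host, pass that file's contents)."""
    for line in counters_text.splitlines():
        if line.strip().startswith(iface + ":"):
            # After "iface:", the first field is bytes received.
            return int(line.split(":", 1)[1].split()[0])
    raise ValueError(f"interface {iface!r} not found")

def throughput_bps(sample0: int, sample1: int, interval_s: float) -> float:
    """Throughput in bytes/s between two counter samples interval_s apart."""
    return (sample1 - sample0) / interval_s
```

On a real host one would read /proc/net/dev twice, a fixed interval apart, and feed the two counters to `throughput_bps`; the file path and column layout are Linux-specific.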
S2, in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds cached data exceeding the preset amount, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detecting the disk write rate of the request end and the CPU utilization of the corresponding core; in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reducing the maximum total number of request threads of the corresponding core, and in response to determining that the disk write rate still exceeds the first threshold, reducing the buffer size of the request end and the current program memory size.
Specifically, during the data-fixing process, suppose the first throughput does not exceed the maximum network bandwidth value and the current program memory holds more than the preset amount of cached data, where the preset amount is 2^n buffer sizes and each buffer size should be smaller than the system memory size. In one embodiment, if a certain amount of cached data still exists in the current program memory and its size is larger than 2 buffer sizes, the current disk IO processing capability, measured by the disk write rate, is checked again. If the disk write rate exceeds a first threshold, for example 90% of the maximum rate as preset from practical experience, or the CPU utilization of the corresponding core exceeds 80%, the maximum total number of request threads is reduced appropriately. Reducing the thread count alleviates disk blocking and avoids the situation where no spare resources remain to request the server side, disk blocking cannot be observed in the task manager, and CPU occupation is very high. This step is a dynamic adjustment based on disk IO processing capability and CPU load: if the disk write rate exceeds the first threshold or CPU resource occupation is especially high, the maximum total number of request threads is reduced by a fixed rule, for example to the preset maximum total divided by 2, thereby lowering the load the current system generates, while the system continues to be monitored and evaluated in real time.
It is then further judged whether the disk write rate still exceeds the first threshold; if so, the buffer size of the request end and the current program memory size are reduced.
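"Reducing the maximum total number of request threads" need not kill threads that are mid-request; one common realization (an illustrative sketch under assumptions, not the patent's implementation) is a concurrency cap that every request thread passes through, which the monitor can lower or raise at runtime.

```python
import threading

class AdjustableLimiter:
    """Runtime-adjustable cap on concurrently running request threads."""
    def __init__(self, limit: int):
        self._cv = threading.Condition()
        self._limit = max(1, limit)
        self._active = 0

    def acquire(self) -> None:
        """Block until a slot is free, then occupy it (call before a request)."""
        with self._cv:
            while self._active >= self._limit:
                self._cv.wait()
            self._active += 1

    def release(self) -> None:
        """Free a slot (call after the request completes)."""
        with self._cv:
            self._active -= 1
            self._cv.notify_all()

    def set_limit(self, new_limit: int) -> None:
        """Lower or raise the cap; a lowered cap takes effect as threads finish."""
        with self._cv:
            self._limit = max(1, new_limit)
            self._cv.notify_all()
```

The monitor thread calls `set_limit(old // 2)` when the disk or CPU thresholds are exceeded; in-flight requests drain naturally instead of being aborted.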
S3, in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory holds no cached data exceeding the preset amount, detecting the CPU utilization of the corresponding core, and in response to determining that the maximum total number of request threads does not exceed a third threshold or the CPU utilization of the corresponding core does not exceed the second threshold, increasing the maximum total number of request threads of the corresponding core.
Specifically, if the first throughput does not exceed the maximum network bandwidth value and the current program memory holds no cached data exceeding the preset amount, the current program memory is considered empty, i.e. the amount of data in memory is smaller than 2 buffer sizes. Data is stored in the current program memory in a specified data structure, which makes it possible to determine whether the memory is empty. The CPU utilization of the corresponding core is then detected; if it does not exceed the second threshold and the maximum total number of request threads does not exceed a third threshold, which defaults to 8 times the number of cores, the maximum total number of request threads is increased appropriately on the basis of its preset value.
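The ramp-up condition of S3, with the embodiment's default third threshold of 8 times the core count, can be stated compactly. The helper names and the 0.8 default are illustrative assumptions.

```python
import os

def third_threshold(cores=None) -> int:
    """Default cap on the maximum total number of request threads:
    8 x the number of CPU cores, per this embodiment's default."""
    if cores is None:
        cores = os.cpu_count() or 1  # fall back to 1 if undetectable
    return 8 * cores

def should_increase_threads(threads: int, cpu_utilization: float,
                            cores: int, cpu_thresh: float = 0.8) -> bool:
    """S3: increase while the thread total is under the third threshold
    or the corresponding core's utilization is under the second threshold."""
    return threads <= third_threshold(cores) or cpu_utilization <= cpu_thresh
```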
And S4, acquiring the second throughput of the network again, and increasing the buffer area size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value.
In a specific embodiment, the current program memory size is the sum of the sizes of a plurality of buffers inside the program memory, and if no maximum value of the current program memory size is preset, the maximum value is determined through a dump mechanism.
Specifically, after the maximum total number of request threads has been increased, the second throughput of the network is obtained again and compared with the maximum network bandwidth value. If the second throughput does not exceed it, the buffer size of the request end and the current program memory size are increased. The program memory currently used for data processing consists of a plurality of buffers, each built on a specified data structure, so adjusting the buffer size changes the current program memory size. The most reasonable maximum value of the current program memory size is obtained by predetermining it where possible and otherwise probing it through a dump mechanism, so that adjusting the buffer size cannot crash the program. Here the dump mechanism refers to the condition in which the current program memory size exceeds a certain threshold and the process can no longer run normally. The maximum program memory size can generally be predetermined; some systems preset it, and if it is not preset, it is estimated through the dump mechanism so that the most suitable maximum value can be selected. The buffer-size and program-memory adjustments in this step are bounded by that maximum. At this point, the dynamic adjustment of the buffer size and the maximum total number of request threads is complete.
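Since the current program memory is the sum of the buffer sizes, any growth in buffer size has to be checked against the memory maximum (preset, or probed via the dump mechanism) so the adjustment itself can never crash the program. A hypothetical model of that bounded-growth rule:

```python
class BufferPool:
    """Program memory modeled as `count` fixed-size buffers; growth is
    permitted only while the pool stays under a memory cap obtained either
    from a preset value or from a dump-style probe of the limit."""
    def __init__(self, buf_size: int, count: int, mem_cap: int):
        self.buf_size, self.count, self.mem_cap = buf_size, count, mem_cap

    @property
    def program_memory(self) -> int:
        # Current program memory size = sum of all buffer sizes.
        return self.buf_size * self.count

    def try_grow(self, factor: int = 2) -> bool:
        """Multiply the buffer size by `factor` if the cap allows;
        refuse (and leave state unchanged) otherwise."""
        if self.buf_size * factor * self.count <= self.mem_cap:
            self.buf_size *= factor
            return True
        return False
```

The doubling factor is an assumption; the point is that the check happens before the allocation, never after.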
And S5, acquiring third throughput of the network, and ending adjustment in response to determining that the third throughput exceeds the maximum network bandwidth value.
In a specific embodiment, the method further comprises:
a third throughput of the network is obtained in response to determining that the disk write rate does not exceed the first threshold or that the second throughput exceeds the maximum network bandwidth value, and the adjustment is ended in response to determining that the third throughput has exceeded the maximum network bandwidth value.
In a specific embodiment, the method further comprises:
in response to determining that the third throughput does not exceed the maximum network bandwidth value, the maximum total number of request threads is increased.
Specifically, after the above dynamic adjustment of the buffer size and the maximum total number of request threads, if the disk write rate does not exceed the first threshold or the second throughput exceeds the maximum network bandwidth value, the three aspects of current CPU core processing capability, disk I/O processing capability, and rationality of the memory adjustment are analyzed again to obtain the third throughput of the current network, and it is judged whether the third throughput exceeds the maximum network bandwidth value. If it does, the adjustment ends; otherwise, the steps from increasing the maximum total number of request threads in step S3 through step S5 are repeated, readjusting the parameters until the network's maximum bandwidth value is reached, thereby improving the remote image-creation capability.
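The repeat of steps S3-S5 described above can be sketched as a simple loop. The helper `measure_throughput`, the thread step size, and the toy throughput model below are all illustrative assumptions, not the patent's implementation.

```python
# Illustrative re-check loop: measure the (third) throughput, and keep
# raising the maximum request thread total (step S3) until the network
# bandwidth ceiling is reached (step S5 ends the adjustment).

def tune_until_saturated(measure_throughput, max_bandwidth, max_threads, step=2):
    while True:
        third_throughput = measure_throughput(max_threads)
        if third_throughput >= max_bandwidth:
            break  # step S5: bandwidth ceiling reached, end the adjustment
        max_threads += step  # repeat step S3: raise the max request thread total
    return max_threads


# Toy model: throughput grows with the thread count until the link saturates.
tuned = tune_until_saturated(lambda n: min(n * 100, 1000),
                             max_bandwidth=1000, max_threads=4)
print(tuned)  # 10 threads saturate the 1000-unit link in this toy model
```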
If the throughput measured at the current device's network card fluctuates significantly relative to the maximum network bandwidth value, steps S2 to S5 are repeated at regular intervals, so that the throughput is adjusted in real time toward the maximum permitted by local device resources and network bandwidth.
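One way the periodic re-adjustment could be realized is a background timer that re-runs steps S2-S5 whenever the network-card throughput drifts too far from the bandwidth ceiling. The interval and the 20% tolerance below are assumed values, not taken from the patent.

```python
import threading

def needs_retune(nic_throughput, max_bandwidth, tolerance=0.2):
    # Retune when throughput fluctuates by more than `tolerance` of the
    # bandwidth ceiling (an assumed threshold, not from the patent).
    return abs(max_bandwidth - nic_throughput) / max_bandwidth > tolerance

def start_watcher(read_nic_throughput, max_bandwidth, retune, interval_s=30.0):
    # Timer thread that periodically samples the NIC and, on a large
    # fluctuation, repeats the tuning steps S2-S5 via `retune`.
    stop = threading.Event()
    def loop():
        while not stop.wait(interval_s):
            if needs_retune(read_nic_throughput(), max_bandwidth):
                retune()
    threading.Thread(target=loop, daemon=True).start()
    return stop  # caller sets the event to stop watching

print(needs_retune(700, 1000))  # 30% gap: True
print(needs_retune(950, 1000))  # 5% gap: False
```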
The labels S1-S5 above are step notations and do not by themselves represent a strict order between the steps.
The effect of the dynamic adjustment method for the buffer size and the maximum total number of request threads in practical application is shown in Table 1, Fig. 4 and Fig. 5. Fig. 4 shows the effect of applying the method in the cloud forensics workstation DC-5900. Compared with the existing manual adjustment of the buffer size and the maximum total number of request threads, the method of this embodiment keeps the overall input/output traffic of network transmission relatively stable (red-framed part) and makes overall image creation faster; the maximum bandwidth utilization is not yet fully reached at present and can be raised further through continued optimization and adjustment.
TABLE 1
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a dynamic adjustment apparatus for a buffer size and a maximum request thread total number, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
The embodiment of the application provides a dynamic adjustment device for buffer size and maximum request thread total number, comprising:
the data acquisition module 1 is configured to acquire the preset buffer size of the request end and the maximum total number of request threads, detect the maximum network bandwidth value from the request end to a target server, and acquire the first throughput of the network;
the first adjusting module 2 is configured to detect the disk writing rate of the request end and the CPU utilization rate of the corresponding kernel in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory has cache data exceeding a preset amount, or to reduce the maximum number of request threads of the corresponding kernel in response to determining that the disk writing rate exceeds a first threshold or the CPU utilization rate of the corresponding kernel exceeds a second threshold in response to determining that the disk writing rate still exceeds the first threshold and to reduce the buffer size of the request end and the current program memory size in response to determining that the disk writing rate still exceeds the first threshold;
the second adjusting module 3 is configured to detect the CPU utilization of the corresponding kernel in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory does not have cache data exceeding the preset amount, and increase the maximum request thread total number of the corresponding kernel in response to determining that the maximum request thread total number does not exceed the third threshold or the CPU utilization of the corresponding kernel does not exceed the second threshold;
a third adjustment module 4, configured to acquire the second throughput of the network again, and in response to determining that the second throughput does not exceed the maximum network bandwidth value, increase the buffer size of the request end and the current program memory size;
an end adjustment module 5 configured to then acquire a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, end the adjustment.
Referring now to fig. 7, there is illustrated a schematic diagram of a computer apparatus 700 suitable for use in implementing an electronic device (e.g., a server or terminal device as illustrated in fig. 1) of an embodiment of the present application. The electronic device shown in fig. 7 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 7, the computer apparatus 700 includes a Central Processing Unit (CPU) 701 and a Graphics Processor (GPU) 702, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 703 or a program loaded from a storage section 709 into a Random Access Memory (RAM) 704. In the RAM 704, various programs and data required for the operation of the apparatus 700 are also stored. The CPU 701, the GPU 702, the ROM 703, and the RAM 704 are connected to each other through a bus 705. An input/output (I/O) interface 706 is also connected to the bus 705.
The following components are connected to the I/O interface 706: an input section 707 including a keyboard, a mouse, and the like; an output section 708 including a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 709 including a hard disk and the like; and a communication section 710 including a network interface card such as a LAN card or a modem. The communication section 710 performs communication processing via a network such as the Internet. A drive 711 may also be connected to the I/O interface 706 as needed. A removable medium 712, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 711 as needed, so that a computer program read therefrom is installed into the storage section 709 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 710, and/or installed from the removable media 712. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 701 and a Graphics Processor (GPU) 702.
It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical fiber cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages or combinations thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments described in the present application may be implemented by software, or may be implemented by hardware. The described modules may also be provided in a processor.
As another aspect, the present application also provides a computer readable medium, which may be contained in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the preset buffer size of the request end and the maximum total number of request threads, detect the maximum network bandwidth value from the request end to a target server, and acquire the first throughput of the network; in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory has cached data exceeding a preset amount, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detect the disk write rate of the request end and the CPU utilization of the corresponding core, in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reduce the maximum total number of request threads of the corresponding core, and in response to determining that the disk write rate still exceeds the first threshold, reduce the buffer size of the request end and the current program memory size; in response to determining that the first throughput does not exceed the maximum network bandwidth value and the current program memory does not have cached data exceeding the preset amount, detect the CPU utilization of the corresponding core, and in response to determining that the maximum total number of request threads does not exceed a third threshold or the CPU utilization of the corresponding core does not exceed the second threshold, increase the maximum total number of request threads of the corresponding core; acquire the second throughput of the network again, and in response to determining that the second throughput does not exceed the maximum network bandwidth value, increase the buffer size of the request end and the current program memory size; and then acquire a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, end the adjustment.
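The decision flow summarized above can be condensed into a single pass, sketched here for illustration only: the metric names, thresholds, and halving/doubling step sizes are placeholders for whatever the real system provides, not the patent's implementation.

```python
# One pass of the S1-S5 decision flow, as a hedged sketch.
# st: dict of live metrics; cfg: dict of thresholds. Returns the updated st.

def adjust(st, cfg):
    over_bw = st["throughput"] > cfg["max_bw"]        # first throughput check
    cache_heavy = st["cached"] > cfg["cache_limit"]   # backlog in program memory
    if over_bw or cache_heavy:
        # Congested/backlogged path (S2): shrink concurrency first, then
        # buffers if the disk still cannot keep up.
        if st["disk_write"] > cfg["t1"] or st["cpu_util"] > cfg["t2"]:
            st["max_threads"] -= 1
        if st["disk_write"] > cfg["t1"]:
            st["buffer"] //= 2
    elif st["max_threads"] < cfg["t3"] or st["cpu_util"] < cfg["t2"]:
        # Headroom available: raise the max request thread total (S3) ...
        st["max_threads"] += 1
        # ... and if the re-measured (second) throughput is still under the
        # bandwidth ceiling, grow the buffer and program memory (S4).
        if st["throughput2"] <= cfg["max_bw"]:
            st["buffer"] *= 2
    return st


st = {"throughput": 500, "throughput2": 600, "cached": 0,
      "disk_write": 10, "cpu_util": 30, "max_threads": 4, "buffer": 1024}
cfg = {"max_bw": 1000, "cache_limit": 100, "t1": 100, "t2": 80, "t3": 16}
res = adjust(st, cfg)
print(res["max_threads"], res["buffer"])  # headroom path: 5 2048
```

In a real deployment this pass would be repeated (and followed by the third-throughput check of S5) until the bandwidth ceiling is reached.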
The foregoing description covers only the preferred embodiments of the present application and the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of the features described above, but is intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in the present application (but not limited thereto).

Claims (8)

1. A method for dynamically adjusting the buffer size and the maximum total number of request threads, characterized by comprising the following steps: obtaining the preset buffer size of the request end and the maximum total number of request threads, detecting the maximum network bandwidth value from the request end to a target server, and obtaining a first throughput of the network; in response to determining that the first throughput does not exceed the maximum network bandwidth value and cached data exceeding a preset amount exists in the current program memory, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detecting the disk write rate of the request end and the CPU utilization of the corresponding core; in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reducing the maximum total number of request threads of the corresponding core, and in response to the disk write rate still exceeding the first threshold, reducing the buffer size of the request end and the current program memory size; in response to determining that the first throughput does not exceed the maximum network bandwidth value and no cached data exceeding the preset amount exists in the current program memory, detecting the CPU utilization of the corresponding core, and in response to the maximum total number of request threads not exceeding a third threshold or the CPU utilization of the corresponding core not exceeding the second threshold, increasing the maximum total number of request threads of the corresponding core; obtaining a second throughput of the network again, and in response to determining that the second throughput does not exceed the maximum network bandwidth value, increasing the buffer size of the request end and the current program memory size; and then obtaining a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, ending the adjustment.

2. The method for dynamically adjusting the buffer size and the maximum total number of request threads according to claim 1, characterized by further comprising: in response to determining that the disk write rate does not exceed the first threshold, or that the second throughput exceeds the maximum network bandwidth value, obtaining a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, ending the adjustment.

3. The method for dynamically adjusting the buffer size and the maximum total number of request threads according to claim 1, characterized by further comprising: in response to determining that the third throughput does not exceed the maximum network bandwidth value, increasing the maximum total number of request threads.

4. The method for dynamically adjusting the buffer size and the maximum total number of request threads according to claim 1, characterized in that the first throughput, the second throughput, and the third throughput are all counted directly on the network card of the request end.

5. The method for dynamically adjusting the buffer size and the maximum total number of request threads according to claim 1, characterized in that the current program memory size is the sum of the sizes of several buffers inside it, and in response to no preset maximum value of the current program memory size existing, the maximum value of the current program memory size is determined through a dump mechanism.

6. A device for dynamically adjusting the buffer size and the maximum total number of request threads, characterized by comprising: a data acquisition module configured to obtain the preset buffer size of the request end and the maximum total number of request threads, detect the maximum network bandwidth value from the request end to a target server, and obtain a first throughput of the network; a first adjustment module configured to, in response to determining that the first throughput does not exceed the maximum network bandwidth value and cached data exceeding a preset amount exists in the current program memory, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detect the disk write rate of the request end and the CPU utilization of the corresponding core, in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reduce the maximum total number of request threads of the corresponding core, and in response to the disk write rate still exceeding the first threshold, reduce the buffer size of the request end and the current program memory size; a second adjustment module configured to, in response to determining that the first throughput does not exceed the maximum network bandwidth value and no cached data exceeding the preset amount exists in the current program memory, detect the CPU utilization of the corresponding core, and in response to the maximum total number of request threads not exceeding a third threshold or the CPU utilization of the corresponding core not exceeding the second threshold, increase the maximum total number of request threads of the corresponding core; a third adjustment module configured to obtain a second throughput of the network again and, in response to determining that the second throughput does not exceed the maximum network bandwidth value, increase the buffer size of the request end and the current program memory size; and an end adjustment module configured to then obtain a third throughput of the network and, in response to determining that the third throughput has exceeded the maximum network bandwidth value, end the adjustment.

7. An electronic device, comprising: one or more processors; and a storage device for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-5.

8. A computer readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the method according to any one of claims 1-5 is implemented.
CN202311531545.0A 2023-11-16 2023-11-16 Dynamic adjustment method and device for buffer size and maximum total number of request threads Pending CN117812152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311531545.0A CN117812152A (en) 2023-11-16 2023-11-16 Dynamic adjustment method and device for buffer size and maximum total number of request threads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311531545.0A CN117812152A (en) 2023-11-16 2023-11-16 Dynamic adjustment method and device for buffer size and maximum total number of request threads

Publications (1)

Publication Number Publication Date
CN117812152A true CN117812152A (en) 2024-04-02

Family

ID=90430832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311531545.0A Pending CN117812152A (en) 2023-11-16 2023-11-16 Dynamic adjustment method and device for buffer size and maximum total number of request threads

Country Status (1)

Country Link
CN (1) CN117812152A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN121056424A (en) * 2025-10-31 2025-12-02 广州致远电子股份有限公司 CAN frame data transmission method, device, equipment and storage medium
CN121056424B (en) * 2025-10-31 2026-02-10 广州致远电子股份有限公司 CAN frame data transmission method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105281981B (en) Data flow monitoring method and device for network service
EP1662441B1 (en) System and method of optimizing graphics processing using tessellation
US9537926B1 (en) Network page latency reduction
US9317427B2 (en) Reallocating unused memory databus utilization to another processor when utilization is below a threshold
US10956214B2 (en) Time frame bounded execution of computational algorithms
CN112506619A (en) Job processing method, apparatus, electronic device, storage medium, and program product
CN110795284A (en) A data recovery method, apparatus, device and readable storage medium
CN117812152A (en) Dynamic adjustment method and device for buffer size and maximum total number of request threads
JP2024512476A (en) Reducing bandwidth consumption with generative adversarial networks
Wang et al. Smarteye: An open source framework for real-time video analytics with edge-cloud collaboration
CN113971200B (en) A map service traffic recording system and method for a cloud native platform
WO2020076394A1 (en) Resource allocation using restore credits
CN112579282B (en) Data processing method, device, system, and computer-readable storage medium
US9342460B2 (en) I/O write request handling in a storage system
CN119106704A (en) Preprocessing methods, devices, equipment, media and products for graph neural networks
CN117873404A (en) A hard disk image storage optimization method and system based on machine vision multi-camera
CN113553372B (en) Method, device, computing equipment and medium for writing to database
US9466042B2 (en) Facilitating the design of information technology solutions
CN113064620B (en) A method and device for processing system data
CN112671918B (en) Binary system-based distributed data downloading method, device, equipment and medium
CN114928862A (en) Method and system for reducing system overhead based on task unloading and service caching
JP7087585B2 (en) Information processing equipment, control methods, and programs
CN120434209B (en) DPDK multi-process-based message playback method, device, medium, and product
CN118210995A (en) Method and device for processing request queue, computer equipment and storage medium
CN117785352A (en) Rendering methods, devices, equipment and storage media

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 361000 Fujian Province Xiamen City Torch High-tech Industrial Development Zone Software Park Phase II Qianpu East Road 188, 19th Floor

Applicant after: Guotou Intelligent Information Technology Co.,Ltd.

Address before: Unit 102-402, No. 12, guanri Road, phase II, Xiamen Software Park, Fujian Province, 361000

Applicant before: XIAMEN MEIYA PICO INFORMATION Co.,Ltd.

Country or region before: China

CB02 Change of applicant information