Disclosure of Invention
An objective of the embodiments of the present application is to provide a method and an apparatus for dynamically adjusting a buffer size and a maximum total number of request threads, so as to solve the technical problems mentioned in the background section.
In a first aspect, the present invention provides a method for dynamically adjusting a buffer size and a maximum total number of request threads, including the steps of:
acquiring a preset buffer size of a request end and a preset maximum total number of request threads, detecting a maximum network bandwidth value from the request end to a target server, and acquiring a first throughput of the network;
in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory holds cached data exceeding a preset amount, or in response to determining that the first throughput exceeds the maximum network bandwidth value, detecting a disk write rate of the request end and a CPU utilization of the corresponding core; in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reducing the maximum total number of request threads of the corresponding core; and in response to determining that the disk write rate still exceeds the first threshold, reducing the buffer size of the request end and the current program memory size;
detecting the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that no cached data exceeding the preset amount exists in the current program memory, and increasing the maximum total number of request threads of the corresponding core in response to determining that the maximum total number of request threads does not exceed a third threshold or that the CPU utilization of the corresponding core does not exceed the second threshold;
acquiring a second throughput of the network, and increasing the buffer size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value; and
then acquiring a third throughput of the network, and ending the adjustment in response to determining that the third throughput exceeds the maximum network bandwidth value.
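The decision flow of the steps above can be sketched as a single adjustment pass. The function name, parameters, and the halving and incrementing factors below are illustrative assumptions for the sketch, not part of the claimed method:

```python
def adjustment_pass(max_threads, buffer_size, throughput, bandwidth_max,
                    cache_over_limit, disk_rate, disk_limit,
                    cpu_util, cpu_limit, thread_cap):
    """One pass over the conditions of the first aspect (illustrative names)."""
    if throughput > bandwidth_max or cache_over_limit:
        # Throughput saturated or cache backing up: check local pressure.
        if disk_rate > disk_limit or cpu_util > cpu_limit:
            max_threads = max(1, max_threads // 2)       # reduce thread total
        if disk_rate > disk_limit:                       # disk still saturated
            buffer_size = max(4096, buffer_size // 2)    # shrink buffers/memory
    elif max_threads < thread_cap and cpu_util <= cpu_limit:
        # Headroom available: raise concurrency; the buffer growth of the
        # later steps happens only after re-sampling the throughput.
        max_threads += 1
    return max_threads, buffer_size
```

The actual growth of the buffer size and program memory is gated on the re-sampled second throughput, so it is deliberately left out of this single pass.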
Preferably, the method further comprises:
obtaining a third throughput of the network in response to determining that the disk write rate does not exceed the first threshold or that the second throughput exceeds the maximum network bandwidth value, and ending the adjustment in response to determining that the third throughput exceeds the maximum network bandwidth value.
Preferably, the method further comprises:
increasing the maximum total number of request threads in response to determining that the third throughput does not exceed the maximum network bandwidth value.
Preferably, the first throughput, the second throughput and the third throughput are all obtained by direct statistics on the network card of the request end.
Preferably, the current program memory size is the sum of the sizes of a plurality of buffers inside the current program memory, and in response to determining that no preset maximum value of the current program memory size exists, the maximum value of the current program memory size is determined through a dump mechanism.
In a second aspect, the present invention provides a device for dynamically adjusting a buffer size and a maximum total number of request threads, including:
a data acquisition module configured to acquire a preset buffer size of a request end and a preset maximum total number of request threads, detect a maximum network bandwidth value from the request end to a target server, and acquire a first throughput of the network;
a first adjusting module configured to detect the disk write rate of the request end and the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory holds cached data exceeding the preset amount, or in response to determining that the first throughput exceeds the maximum network bandwidth value; to reduce the maximum total number of request threads of the corresponding core in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold; and to reduce the buffer size of the request end and the current program memory size in response to determining that the disk write rate still exceeds the first threshold;
a second adjusting module configured to detect the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that no cached data exceeding the preset amount exists in the current program memory, and to increase the maximum total number of request threads of the corresponding core in response to determining that the maximum total number of request threads does not exceed a third threshold or that the CPU utilization of the corresponding core does not exceed the second threshold;
a third adjusting module configured to acquire a second throughput of the network and to increase the buffer size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value; and
an end adjustment module configured to then acquire a third throughput of the network, and to end the adjustment in response to determining that the third throughput exceeds the maximum network bandwidth value.
In a third aspect, the present invention provides an electronic device comprising one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method for dynamically adjusting the buffer size and the maximum total number of request threads provided by the invention maximizes the utilization of local device resources and network bandwidth by presetting a dynamic adjustment procedure and detecting the network bandwidth processing capacity in real time, thereby improving the efficiency of remote data preservation.
(2) The method for dynamically adjusting the buffer size and the maximum total number of request threads provided by the invention adjusts both values to suit the current device resource environment by analyzing the I/O usage of the current disk, the current program memory usage, and the throughput of the current network, combined with empirical values for network data transmission drawn from real-world practice, thereby achieving better results when preserving remote server data in the electronic data forensics industry.
(3) The method for dynamically adjusting the buffer size and the maximum total number of request threads is suitable not only for remote disk-image evidence preservation, improving the efficiency of remote imaging, but also for real-world scenarios such as read-write separation, chunked transmission, and compressed data transmission.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
FIG. 1 illustrates an exemplary device architecture 100 in which a method of dynamically adjusting buffer size and maximum number of request threads or a device for dynamically adjusting buffer size and maximum number of request threads of embodiments of the present application may be applied.
As shown in FIG. 1, the device architecture 100 may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the first terminal device 101, the second terminal device 102, the third terminal device 103, and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The user may interact with the server 105 via the network 104 using the first terminal device 101, the second terminal device 102, or the third terminal device 103, to receive or send messages and the like. Various applications, such as data processing applications and file processing applications, may be installed on the first terminal device 101, the second terminal device 102, and the third terminal device 103.
The first terminal device 101, the second terminal device 102 and the third terminal device 103 may be hardware or software. When the first terminal device 101, the second terminal device 102, and the third terminal device 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. When the first terminal apparatus 101, the second terminal apparatus 102, and the third terminal apparatus 103 are software, they can be installed in the above-listed electronic apparatuses. Which may be implemented as multiple software or software modules (e.g., software or software modules for providing distributed services) or as a single software or software module. The present invention is not particularly limited herein.
The server 105 may be a server that provides various services, such as a background data processing server that processes files or data uploaded by the first terminal device 101, the second terminal device 102, and the third terminal device 103. The background data processing server can process the acquired files or data to generate a processing result.
It should be noted that, the method for dynamically adjusting the buffer size and the total number of the maximum request threads provided in the embodiments of the present application may be executed by the server 105, or may be executed by the first terminal device 101, the second terminal device 102, or the third terminal device 103, and correspondingly, the dynamic adjusting device for the buffer size and the total number of the maximum request threads may be set in the server 105, or may be set in the first terminal device 101, the second terminal device 102, or the third terminal device 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the processed data does not need to be acquired from a remote location, the above-described apparatus architecture may not include a network, but only a server or terminal device.
Fig. 2 shows a method for dynamically adjusting the buffer size and the total number of maximum request threads according to an embodiment of the present application, including the following steps:
S1, acquiring a preset buffer size of a request end and a preset maximum total number of request threads, detecting the maximum network bandwidth value from the request end to a target server, and acquiring a first throughput of the network.
In a specific embodiment, the first throughput, the second throughput and the third throughput are all obtained by direct statistics on the network card of the request end.
Specifically, the remote disk-image data preservation process is taken as an example to describe the specific steps of the method for dynamically adjusting the buffer size and the maximum total number of request threads provided by the embodiment of the present application; the scheme is also applicable to real-world scenarios such as read-write separation, chunked transmission, and compressed data transmission.
Referring to FIG. 3, when preserving data as a remote disk image, the buffer size and the maximum total number of request threads are preset according to the programming language, and are further preset according to the operating system in combination with practical experience. Based on the device on which the request end currently runs, the maximum network bandwidth value from the request end to the target server is detected through basic socket communication, and the first throughput of the current network is counted directly on the network card rather than at the kernel layer, which reduces kernel pressure and avoids a large number of I/O copies.
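As a sketch of counting throughput directly on the network card, the snippet below derives bits per second from two cumulative byte-counter samples. The sysfs path is the standard Linux location for per-interface counters; the function names themselves are illustrative assumptions:

```python
def throughput_bps(rx_bytes_t0, rx_bytes_t1, interval_s):
    """Throughput in bits per second from two cumulative byte-counter samples."""
    return (rx_bytes_t1 - rx_bytes_t0) * 8 / interval_s

def nic_rx_bytes(interface):
    """Read the cumulative receive-byte counter for one NIC (Linux sysfs)."""
    with open(f"/sys/class/net/{interface}/statistics/rx_bytes") as f:
        return int(f.read())
```

Sampling the counter twice and dividing by the interval avoids any per-packet accounting in the application or kernel layer, which is the point made above about reducing kernel pressure.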
S2, in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory holds cached data exceeding the preset amount, or in response to determining that the first throughput exceeds the maximum network bandwidth value, detecting the disk write rate of the request end and the CPU utilization of the corresponding core; in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold, reducing the maximum total number of request threads of the corresponding core; and in response to determining that the disk write rate still exceeds the first threshold, reducing the buffer size of the request end and the current program memory size.
Specifically, during the data preservation process, the first condition holds if the first throughput does not exceed the maximum network bandwidth value and the current program memory holds more than the preset amount of cached data, where the preset amount of cached data is the size of 2^n buffers, and each buffer size is suggested to be smaller than the system memory size. In one embodiment, if cached data still exists in the current program memory and its size is larger than the size of 2 buffers, the current disk I/O processing capability is detected again, where disk I/O processing capability is measured by the disk write rate. If the disk write rate exceeds a first threshold, for example preset to 90% of the maximum rate according to practical experience, or the CPU utilization of the corresponding core exceeds 80%, the maximum total number of request threads is reduced appropriately. Reducing the maximum total number of request threads mitigates disk blocking and prevents resource-starved, fragmented requests to the server side; this disk blocking cannot be observed in the task manager even though CPU occupation is very high. This step is a dynamic adjustment based on disk I/O processing capability and CPU load capacity: if the disk write rate exceeds the first threshold or CPU resource occupation is particularly high, the maximum total number of request threads is reduced by a certain ratio, for example to the preset maximum total number of request threads divided by 2, thereby lowering the load on the current system, which is monitored and evaluated in real time.
It is then further determined whether the disk write rate still exceeds the first threshold; if so, the buffer size of the request end and the current program memory size are reduced.
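The trigger conditions of step S2 can be captured as a small predicate plus two constants. The constants restate the 90% disk write rate and 80% CPU utilization example values given above, and the `n = 1` default mirrors the "larger than 2 buffers" embodiment; all names are illustrative:

```python
DISK_WRITE_LIMIT = 0.90   # first threshold: 90% of the maximum disk write rate
CPU_LIMIT = 0.80          # second threshold: 80% CPU utilization of the core

def cache_over_limit(cached_bytes, buffer_size, n=1):
    """True when cached data exceeds the preset amount of 2**n buffer sizes.

    The embodiment above uses n = 1, i.e. cached data larger than the
    size of 2 buffers triggers the pressure path of step S2.
    """
    return cached_bytes > (2 ** n) * buffer_size
```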
S3, detecting the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that no cached data exceeding the preset amount exists in the current program memory, and increasing the maximum total number of request threads of the corresponding core in response to determining that the maximum total number of request threads does not exceed a third threshold or that the CPU utilization of the corresponding core does not exceed the second threshold.
Specifically, if the first throughput does not exceed the maximum network bandwidth value and no cached data exceeding the preset amount exists in the current program memory, that is, the amount of data in the current program memory is smaller than the size of 2 buffers, the data is stored in the current program memory in a specified data structure so that it can be determined whether the current program memory is empty. The CPU utilization of the corresponding core is then detected again; if it does not exceed the second threshold and the maximum total number of request threads does not exceed a third threshold, which defaults to 8 times the number of cores, the maximum total number of request threads is increased appropriately on the basis of the preset maximum total number of request threads.
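A minimal sketch of the step S3 headroom check, assuming the default third threshold of 8 times the core count described above; the function and parameter names are illustrative:

```python
import os

def thread_cap(cores=None):
    """Third threshold: defaults to 8 times the number of CPU cores."""
    cores = cores if cores is not None else (os.cpu_count() or 1)
    return 8 * cores

def maybe_raise_threads(max_threads, cpu_util, cpu_limit=0.80, cores=None):
    """Increase the thread total only while both limits leave headroom."""
    if max_threads < thread_cap(cores) and cpu_util <= cpu_limit:
        return max_threads + 1
    return max_threads
```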
S4, acquiring a second throughput of the network, and increasing the buffer size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value.
In a specific embodiment, the current program memory size is the sum of the sizes of a plurality of buffers inside the current program memory, and in response to determining that no preset maximum value of the current program memory size exists, the maximum value of the current program memory size is determined through a dump mechanism.
Specifically, after the maximum total number of request threads has been increased, the second throughput of the network is acquired again and compared with the maximum network bandwidth value. If the second throughput does not exceed the maximum network bandwidth value, the buffer size of the request end and the current program memory size are increased. The current program memory used for data processing consists of a plurality of buffers, each constructed from a specified data structure, so adjusting the buffer size changes the current program memory size. The most reasonable maximum value of the current program memory size is obtained by pre-judging a specified maximum value in combination with a dump mechanism, so that adjusting the buffer size does not cause a program crash. The dump mechanism refers to the condition in which the current program memory size exceeds a certain threshold and the process can no longer run normally. The maximum value of the current program memory size can generally be determined in advance: some systems preset such a maximum value, and if none is preset, it is estimated through the dump mechanism, so that the most suitable maximum value is selected and the buffer size and current program memory size adjustments in this step are bounded by it. At this point, the dynamic adjustment of the buffer size and the maximum total number of request threads is complete.
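The bounded buffer growth of step S4 can be sketched as follows. Here `mem_max` stands in for the preset (or dump-mechanism-derived) maximum program memory size, and the doubling factor is an illustrative assumption:

```python
def grow_buffers(buffer_size, buffer_count, mem_max):
    """Double the buffer size only if total program memory stays under mem_max.

    The program memory is modelled as buffer_count buffers of buffer_size
    bytes, so growing a buffer grows the program memory with it.
    """
    new_size = buffer_size * 2
    if new_size * buffer_count <= mem_max:
        return new_size
    return buffer_size   # at the bound: leave unchanged to avoid a crash
```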
And S5, acquiring third throughput of the network, and ending adjustment in response to determining that the third throughput exceeds the maximum network bandwidth value.
In a specific embodiment, the method further comprises:
a third throughput of the network is obtained in response to determining that the disk write rate does not exceed the first threshold or that the second throughput exceeds the maximum network bandwidth value, and the adjustment is ended in response to determining that the third throughput exceeds the maximum network bandwidth value.
In a specific embodiment, the method further comprises:
in response to determining that the third throughput does not exceed the maximum network bandwidth value, the maximum total number of request threads is increased.
Specifically, after the above dynamic adjustment of the buffer size and the maximum total number of request threads, if the disk write rate does not exceed the first threshold or the second throughput exceeds the maximum network bandwidth value, the CPU core processing capability, the disk I/O processing capability, and the reasonableness of the memory adjustment are analyzed again. The third throughput of the current network is obtained and compared with the maximum network bandwidth value: if it exceeds that value, the adjustment ends; otherwise, the process from increasing the maximum total number of request threads in step S3 through step S5 is repeated, re-tuning the parameters so that the throughput approaches the maximum network bandwidth value, thereby improving remote imaging capability.
If the throughput of the current device measured at the network card fluctuates significantly relative to the maximum network bandwidth value, steps S2 to S5 are repeated periodically, so that local device resources and network bandwidth are adjusted toward maximum throughput in real time.
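The periodic re-adjustment described above can be sketched as a simple monitoring loop. The tolerance, sampling period, and callable names are illustrative assumptions:

```python
import time

def monitor(sample_throughput, readjust, bandwidth_max,
            tolerance=0.2, period_s=5.0, rounds=3):
    """Re-run steps S2-S5 whenever measured throughput drifts from the
    bandwidth maximum by more than `tolerance` of that maximum."""
    for _ in range(rounds):
        if abs(sample_throughput() - bandwidth_max) > tolerance * bandwidth_max:
            readjust()               # repeat the S2-S5 adjustment pass
        time.sleep(period_s)
```

In practice `sample_throughput` would read the network-card counters and `readjust` would perform the adjustment pass; `rounds` is bounded here only so the sketch terminates.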
The labels S1-S5 above are step designations and do not by themselves impose an order on the steps.
The effect of the method for dynamically adjusting the buffer size and the maximum total number of request threads in practical application is shown in Table 1, FIG. 4, and FIG. 5. FIG. 4 shows the effect of applying the method in the cloud forensics workstation DC-5900. Compared with the existing manual adjustment of the buffer size and the maximum total number of request threads, the method of the embodiment of the present application keeps the input and output traffic of the whole network transmission relatively stable (the red-boxed portion), and the overall image creation is faster. The maximum bandwidth utilization is not yet fully reached and can be further improved through continued optimization and tuning.
TABLE 1
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a dynamic adjustment apparatus for a buffer size and a maximum request thread total number, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
The embodiment of the application provides a dynamic adjustment device for buffer size and maximum request thread total number, comprising:
The data acquisition module 1 is configured to acquire a preset buffer size of a request end and a preset maximum total number of request threads, detect the maximum network bandwidth value from the request end to a target server, and acquire a first throughput of the network;
The first adjusting module 2 is configured to detect the disk write rate of the request end and the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory holds cached data exceeding a preset amount, or in response to determining that the first throughput exceeds the maximum network bandwidth value; to reduce the maximum total number of request threads of the corresponding core in response to determining that the disk write rate exceeds a first threshold or the CPU utilization of the corresponding core exceeds a second threshold; and to reduce the buffer size of the request end and the current program memory size in response to determining that the disk write rate still exceeds the first threshold;
The second adjusting module 3 is configured to detect the CPU utilization of the corresponding core in response to determining that the first throughput does not exceed the maximum network bandwidth value and that no cached data exceeding the preset amount exists in the current program memory, and to increase the maximum total number of request threads of the corresponding core in response to determining that the maximum total number of request threads does not exceed the third threshold or that the CPU utilization of the corresponding core does not exceed the second threshold;
a third adjustment module 4, configured to acquire the second throughput of the network again, and in response to determining that the second throughput does not exceed the maximum network bandwidth value, increase the buffer size of the request end and the current program memory size;
an end adjustment module 5 configured to then acquire a third throughput of the network, and in response to determining that the third throughput has exceeded the maximum network bandwidth value, end the adjustment.
Referring now to fig. 7, there is illustrated a schematic diagram of a computer apparatus 700 suitable for use in implementing an electronic device (e.g., a server or terminal device as illustrated in fig. 1) of an embodiment of the present application. The electronic device shown in fig. 7 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in FIG. 7, the computer apparatus 700 includes a central processing unit (CPU) 701 and a graphics processor (GPU) 702, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 703 or a program loaded from a storage section 709 into a random access memory (RAM) 704. The RAM 704 also stores various programs and data required for the operation of the apparatus 700. The CPU 701, the GPU 702, the ROM 703, and the RAM 704 are connected to each other through a bus 705. An input/output (I/O) interface 706 is also connected to the bus 705.
The following components are connected to the I/O interface 706: an input section 707 including a keyboard, a mouse, and the like; an output section 708 including a display such as a liquid crystal display (LCD), a speaker, and the like; a storage section 709 including a hard disk or the like; and a communication section 710 including a network interface card such as a LAN card or a modem. The communication section 710 performs communication processing via a network such as the Internet. A drive 711 may also be connected to the I/O interface 706 as needed. A removable medium 712, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 711 as needed, so that a computer program read out therefrom is installed into the storage section 709 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 710, and/or installed from the removable media 712. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 701 and a Graphics Processor (GPU) 702.
It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based devices which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments described in the present application may be implemented by software, or may be implemented by hardware. The described modules may also be provided in a processor.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the size of a buffer zone of a preset request end and the total number of maximum request threads, detect the maximum network bandwidth value from the request end to a target server, and acquire the first throughput of a network; in response to determining that the first throughput does not exceed the maximum network bandwidth value and that the current program memory has cache data exceeding a preset amount, or in response to determining that the first throughput has exceeded the maximum network bandwidth value, detect a disk write rate of the request end and a CPU utilization rate of a corresponding kernel, in response to determining that the disk write rate exceeds a first threshold or the CPU utilization rate of the corresponding kernel exceeds a second threshold, reduce the total number of maximum request threads of the corresponding kernel, and in response to determining that the disk write rate still exceeds the first threshold, reduce the buffer size of the request end and the current program memory size; detect the CPU utilization rate of the corresponding kernel if the first throughput does not exceed the maximum network bandwidth value and the cache data exceeding the preset amount does not exist in the current program memory, and increase the maximum request thread total number of the corresponding kernel if the maximum request thread total number does not exceed a third threshold or the CPU utilization rate of the corresponding kernel does not exceed a second threshold; acquire a second throughput of the network again, and increase the buffer size of the request end and the current program memory size in response to determining that the second throughput does not exceed the maximum network bandwidth value; and then obtain a third throughput of the network, and end the adjustment in response to determining that the third throughput has exceeded the maximum network bandwidth value.
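The adjustment round described above can be sketched in code. The following is a minimal illustrative sketch, not the claimed implementation: all names (`Metrics`, `Limits`, `State`, `tune`) and the halving/doubling step sizes are assumptions introduced here for clarity, and `measure()` stands in for whatever sampling mechanism the request end actually uses.

```python
from dataclasses import dataclass

# Hypothetical data holders; field names are illustrative, not from the claims.
@dataclass
class Metrics:
    throughput: float     # measured network throughput
    cache_backlog: bool   # program memory holds more cached data than the preset amount
    disk_rate: float      # disk write rate of the request end
    cpu: float            # CPU utilization rate of the corresponding kernel

@dataclass
class Limits:
    bandwidth: float      # maximum network bandwidth value to the target server
    disk: float           # first threshold (disk write rate)
    cpu: float            # second threshold (CPU utilization)
    thread_cap: int       # third threshold (maximum request thread total)

@dataclass
class State:
    buffer_size: int      # buffer size of the request end
    memory: int           # current program memory size
    max_threads: int      # maximum request thread total of the corresponding kernel

def tune(state: State, measure, limits: Limits) -> State:
    """One adjustment round; `measure()` re-samples the current metrics."""
    m = measure()
    if m.throughput > limits.bandwidth or m.cache_backlog:
        # Congested branch: throttle threads first, then shrink buffers
        # if the disk write rate still exceeds the first threshold.
        if m.disk_rate > limits.disk or m.cpu > limits.cpu:
            state.max_threads = max(1, state.max_threads - 1)
        if measure().disk_rate > limits.disk:
            state.buffer_size //= 2
            state.memory //= 2
    else:
        # Headroom branch: grow the thread total, then grow buffer and
        # memory while the re-measured throughput stays within bandwidth.
        if state.max_threads < limits.thread_cap and m.cpu <= limits.cpu:
            state.max_threads += 1
        if measure().throughput <= limits.bandwidth:   # second throughput
            state.buffer_size *= 2
            state.memory *= 2
        # A subsequent sample exceeding the bandwidth (the third
        # throughput) would end the adjustment for this round.
    return state
```

In practice `tune` would run periodically, so the parameters converge toward the bandwidth ceiling from either side rather than being set once.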
The foregoing description is only of the preferred embodiments of the present application and is presented as a description of the principles of the technology being utilized. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but is intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above-described features may be replaced with (but are not limited to) technical features having similar functions disclosed in the present application.