Disclosure of Invention
Embodiments of the present application provide a log flushing method (a method for persisting cached application logs to storage) and device, a storage medium, and an electronic device, which at least solve the problem in the related art of poor real-time performance in caching application logs.
According to an embodiment of the present application, there is provided a log flushing method, where N server applications run on a server, N is an integer greater than 1, and the method is applied to the server and includes:
acquiring storage requirement information of each of the N server applications, wherein the storage requirement information indicates the storage space required by the corresponding server application to cache the application logs it generates;
allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, wherein each of the M cache queues is used to cache application logs generated by the corresponding one of the N server applications, and M is an integer greater than or equal to N;
and extracting the application logs from the M cache queues and flushing them to storage.
Optionally, the acquiring storage requirement information of each of the N server applications includes:
collecting historical queue usage parameters of each of the N server applications, wherein the historical queue usage parameters indicate how the corresponding server application has used a historical cache queue, and the historical cache queue is a cache queue allocated to that server application at a historical stage;
and generating the storage requirement information of the corresponding server application according to the historical queue usage parameters.
Optionally, the collecting historical queue usage parameters of each of the N server applications includes:
collecting the historical queue utilization of each of the N server applications, wherein the historical queue utilization is a first ratio of the historical queue used capacity to the historical queue total capacity;
determining the historical queue utilization as the historical queue usage parameter;
the generating the storage requirement information of the corresponding server application according to the historical queue usage parameter includes:
when the historical queue utilization is greater than a target utilization threshold, generating an expected queue total capacity according to a second ratio of the historical queue used capacity to the target utilization threshold, wherein the expected queue total capacity is greater than the second ratio and is the total capacity to which the historical cache queue is to be adjusted;
and determining the expected queue total capacity as the storage requirement information.
Optionally, the allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, includes one of the following:
when the storage requirement information of the ith server application among the N server applications is an ith expected queue total capacity, and the maximum capacity allowed for a single cache queue is greater than or equal to the ith expected queue total capacity, allocating to the ith server application a cache queue whose capacity is at least the ith expected queue total capacity;
when the storage requirement information of the ith server application among the N server applications is the ith expected queue total capacity, and the maximum capacity allowed for a single cache queue is smaller than the ith expected queue total capacity, calculating a third ratio P of the ith expected queue total capacity to the maximum capacity; when P is an integer, allocating to the ith server application at least P cache queues each with the maximum capacity; when P is not an integer, calculating a fourth ratio Q of the ith expected queue total capacity to ⌊P⌋+1 (the integer part of P plus one), and allocating to the ith server application ⌊P⌋+1 cache queues each with a capacity greater than or equal to Q.
Optionally, after allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, the method further includes:
detecting the queue utilization of each of the M cache queues, wherein the queue utilization is a fifth ratio of the used capacity of the corresponding cache queue to its total queue capacity;
recording each cache queue as a historical cache queue, and recording the queue utilization of each cache queue as the historical queue usage parameter of the corresponding historical cache queue.
Optionally, the extracting the application logs from the M cache queues for flushing includes:
performing R rounds of application log extraction on the M cache queues until all the application logs in the M cache queues have been extracted, to obtain an initial application log sequence;
and sorting the application logs in the initial application log sequence according to the log generation time of each application log, to obtain a target application log sequence.
Optionally, the performing R rounds of application log extraction on the M cache queues includes:
performing the tth of the R rounds of application log extraction on the M cache queues through the following steps:
acquiring a target extraction order, wherein the target extraction order is a preset order in which application logs are extracted from the M cache queues;
and extracting k application logs from each of the M cache queues in turn according to the target extraction order, wherein k is an integer greater than or equal to 1.
Optionally, the sorting the application logs in the initial application log sequence according to the log generation time of each application log, to obtain a target application log sequence, includes:
extracting time parameter information from the header information of each application log in the initial application log sequence, wherein the time parameter information indicates the log generation time of the corresponding application log;
and sorting the application logs in the initial application log sequence according to the time parameter information, to obtain the target application log sequence.
Optionally, after allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, the method further includes:
when a target server application among the N server applications has a target application log to be stored, detecting the current storage state of the target cache queue corresponding to the target server application;
and when the storage state indicates that the target cache queue has no remaining storage space, performing a target storage operation on the target application log according to the target log level of the target application log and the reference log levels of a plurality of reference application logs currently cached in the target cache queue, wherein a log level indicates the importance of the corresponding log.
Optionally, the performing a target storage operation on the target application log according to the target log level of the target application log and the reference log levels of the plurality of reference application logs currently cached in the target cache queue includes:
locating, from the plurality of reference application logs, one candidate application log whose reference log level is lower than the target log level;
and determining the target storage operation to be overwriting the candidate application log with the target application log.
Optionally, the locating, from the plurality of reference application logs, a candidate application log whose reference log level is lower than the target log level includes:
acquiring alternative application logs one by one from the plurality of reference application logs in order of log generation time, and comparing the reference log level of each alternative application log with the target log level, wherein a reference application log with an earlier log generation time is acquired earlier;
when the reference log level of an alternative application log is lower than the target log level, determining that alternative application log as the candidate application log;
and when the reference log level of an alternative application log is greater than or equal to the target log level, continuing to acquire the next alternative application log from the plurality of reference application logs in order of log generation time.
According to another embodiment of the present application, there is further provided a log flushing device, where N server applications run on a server, N is an integer greater than 1, and the device is applied to the server and includes:
an acquisition module, configured to acquire storage requirement information of each of the N server applications, wherein the storage requirement information indicates the storage space required by the corresponding server application to cache the application logs it generates;
an allocation module, configured to allocate to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, wherein each of the M cache queues is used to cache application logs generated by the corresponding one of the N server applications, and M is an integer greater than or equal to N;
and an extraction module, configured to extract the application logs from the M cache queues and flush them to storage.
According to a further embodiment of the present application, there is also provided a computer program product, including a computer program which, when executed by a processor, performs the steps of any of the method embodiments described above.
According to a further embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the present application, there is also provided an electronic device, including a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In the embodiments of the present application, a log flushing method is provided. N server applications run on a server, N being an integer greater than 1. The server first acquires storage requirement information of each of the N server applications, the storage requirement information indicating the storage space required by the corresponding server application to cache the application logs it generates; then allocates to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, obtaining M cache queues, where each of the M cache queues caches the application logs generated by the corresponding one of the N server applications and M is an integer greater than or equal to N; and finally extracts the application logs from the M cache queues and flushes them to storage. Even when several server applications generate application logs at the same time and those logs must be cached concurrently, the logs generated by different server applications are cached in the cache queues allocated to those applications respectively. This avoids resource preemption, eliminates the situation in which one server application must wait for another to finish caching its logs into a shared queue, and guarantees the real-time performance of application log storage. The technical solution thus solves the problem in the related art of poor real-time performance of application log caching and achieves the technical effect of improving it.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed on a server device or a similar computing device. Taking execution on a server device as an example, fig. 1 is a block diagram of the hardware structure of a server device for a log flushing method according to an embodiment of the present application. As shown in fig. 1, the server device may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those of ordinary skill in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the server device described above. For example, the server device may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the log flushing method in an embodiment of the present application. The processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, which may be connected to the server device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a server device. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The terms involved in the embodiments of the present application are explained as follows:
Ring queue: a circular data structure whose head and tail are connected; when the queue is full, the space freed at the front can continue to be used, so that the storage space is used cyclically (a minimal C sketch is given after these terms);
UART (Universal Asynchronous Receiver/Transmitter): an interface for serial data transmission, responsible for asynchronous transmission and reception of data;
Log: a record generated by a system, an application program, or a device, containing information such as operation processes, error messages, and user behavior.
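For illustration only, the following is a minimal sketch in C of the ring queue described above; the names and the byte-oriented interface are assumptions made for the example, not part of the claimed embodiments.

```c
#include <stddef.h>

/* A minimal ring (circular) queue over a fixed-capacity byte buffer.
 * When the queue is full, writing continues over the oldest data,
 * so the storage space is reused cyclically, as described above. */
typedef struct {
    unsigned char *buf;  /* backing storage          */
    size_t cap;          /* total capacity in bytes  */
    size_t head;         /* read position (oldest)   */
    size_t tail;         /* write position           */
    size_t used;         /* bytes currently stored   */
} ring_queue;

void rq_init(ring_queue *q, unsigned char *storage, size_t cap) {
    q->buf = storage;
    q->cap = cap;
    q->head = q->tail = q->used = 0;
}

/* Append len bytes; the oldest bytes are overwritten when full. */
void rq_push(ring_queue *q, const unsigned char *data, size_t len) {
    for (size_t i = 0; i < len; i++) {
        q->buf[q->tail] = data[i];
        q->tail = (q->tail + 1) % q->cap;
        if (q->used == q->cap)
            q->head = (q->head + 1) % q->cap;  /* drop oldest byte */
        else
            q->used++;
    }
}
```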
Before the optional embodiments of the present application are described, the related art is first introduced so that the inventive concept and technical solutions of the present application can be better understood:
In the related art, a common application log management method is to use one shared ring queue to store and process the application logs generated by a plurality of server applications. The ring queue has a storage space of fixed capacity; when the storage space becomes full, a new application log overwrites the oldest one, and the log storage space is recycled. Because the storage space of the ring queue is a shared resource, each server application must check whether the queue is occupied by another server application before attempting to add an application log to it; if another server application is currently adding application log data, it must wait until that log has been stored and the queue storage resource has been released. Under high concurrency, when a plurality of server applications attempt to add application logs to the ring queue at the same time, resource contention occurs and application log recording is delayed. This delay postpones the storage of application log data, so the real-time performance of application log storage is poor and the recorded log behavior is inconsistent with the actual behavior of the system. In particular, when the system behaves abnormally, application log data generated by different server applications at the same moment cannot be accurately analyzed, and the system state cannot be accurately reflected. To address these problems in the related art, the present application provides a log flushing method that stores the application logs of different server applications in a plurality of cache queues and extracts the application log data from each cache queue in turn, so that the current running state of the system can be inspected simply by sorting the application logs in timestamp order. The application thereby avoids delays in application log recording, improves the consistency of application log data, and meets the requirements of application log management in a high-concurrency environment.
In this embodiment, a log flushing method is provided. Fig. 2 is a flowchart of a log flushing method according to an embodiment of the present application. As shown in fig. 2, N server applications run on a server, N is an integer greater than 1, and the method is applied to the server. The flow includes the following steps:
step S12, acquiring storage requirement information of each of the N server applications, wherein the storage requirement information indicates the storage space required by the corresponding server application to cache the application logs it generates;
step S14, allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, wherein each of the M cache queues is used to cache application logs generated by the corresponding one of the N server applications, and M is an integer greater than or equal to N;
and step S16, extracting the application logs from the M cache queues and flushing them to storage.
Optionally, in this embodiment, the storage requirement information of the server applications may be obtained by, but is not limited to, analyzing the historical operating data of the N server applications, such as the storage capacity of the logs each server application has generated historically, to predict the storage requirement of the corresponding server application for the logs it will generate in the future. In this way, the cache capacity of each server application can be adjusted dynamically, and cache resources can be allocated more accurately.
Optionally, in this embodiment, a cache queue may be, but is not limited to, any queue with first-in-first-out storage, including a ring queue, a linked-list queue, and the like. The number of server applications is not necessarily the same as the number of corresponding cache queues: one server application may be allocated one cache queue or a plurality of cache queues.
Optionally, in this embodiment, the allocation of a cache queue that satisfies the corresponding storage requirement information to each of the N server applications, to obtain M cache queues, may be performed in, but is not limited to, the following manners:
1) According to the storage requirements of the server applications, a server application with a large storage requirement is allocated larger cache queues, and a server application with a small storage requirement is allocated smaller cache queues. Fig. 3 is a schematic diagram of cache queues that store application logs of all levels indiscriminately. As shown in fig. 3, the logs generated by server application 1 are cached in queue 1, and the logs generated by server application 2 are cached in queue 2 and queue 3, which form the cache queue group corresponding to server application 2. Each such queue stores application logs of all levels indiscriminately, so cache resources are used reasonably and overflow and loss of application log data are prevented.
2) Provided that the allocated cache resources satisfy the storage requirement of the corresponding server application, the number of cache queues may be divided according to the number of application log levels. For example, among the N server applications, if the storage requirement capacity of the ith server application is D and the application logs are divided into m levels, at least m cache queues with a total capacity of at least D may be allocated to the ith server application, forming the cache queue group corresponding to the ith server application. Fig. 4 is a schematic diagram of cache queues that store logs by application log level, according to an embodiment of the present application. As shown in fig. 4, assume there are 3 application log levels, i: info, e: error, and f: fatal, with priority info < error < fatal. The cache queue group corresponding to server application 3 includes queue 4, queue 5, and queue 6, where queue 4 stores application logs of level "f", queue 5 stores application logs of level "e", and queue 6 stores application logs of level "i". In this way, application logs of different levels are stored in different cache queues: after queue 4 (level "f") is full, a level "f" application log can directly overwrite a level "i" application log in queue 6 without any level comparison; likewise, after queue 5 is full, its new logs can overwrite the application logs in the lower-level queue 6. This avoids the resource and time cost of comparing application log levels one by one, improves the storage efficiency of application logs, and saves the resource consumption of application log storage.
Optionally, in this embodiment, the manner of extracting the application logs from the M cache queues for flushing may be, but is not limited to, any of the following: setting a timed task that, at a preset interval such as every few minutes or hours, extracts all application logs from the cache queues in a preset order and flushes them; performing the extraction operation when the application log data in the cache queues reach a certain amount or a specific condition is triggered; or preferentially extracting, according to application log priority, the application log data of high-priority applications from the cache queues.
As an optional solution, the acquiring storage requirement information of each of the N server applications includes:
S21, collecting historical queue usage parameters of each of the N server applications, wherein the historical queue usage parameters indicate how the corresponding server application has used a historical cache queue, and the historical cache queue is a cache queue allocated to that server application at a historical stage;
S22, generating the storage requirement information of the corresponding server application according to the historical queue usage parameters.
Optionally, in this embodiment, the storage requirement information may be, but is not limited to, any information that can reflect the storage space requirement of a server application, including the utilization of storage resources by a single server application, how long the application log data needs to be kept in the cache, the priority of the server application, the speed and frequency of application log generation, and so on.
As an optional solution, the collecting historical queue usage parameters of each of the N server applications includes:
S31, collecting the historical queue utilization of each of the N server applications, wherein the historical queue utilization is a first ratio of the historical queue used capacity to the historical queue total capacity;
S32, determining the historical queue utilization as the historical queue usage parameter;
S33, the generating the storage requirement information of the corresponding server application according to the historical queue usage parameter includes:
S34, when the historical queue utilization is greater than a target utilization threshold, generating an expected queue total capacity according to a second ratio of the historical queue used capacity to the target utilization threshold, wherein the expected queue total capacity is greater than the second ratio and is the total capacity to which the historical cache queue is to be adjusted;
S35, determining the expected queue total capacity as the storage requirement information.
Optionally, in this embodiment, the historical queue usage parameter used to measure the use of a historical queue may be, but is not limited to, the average utilization of the queue over a period of time, the number of times the queue overflowed, and so on. For example, when the average utilization of a cache queue exceeds a certain threshold within a predetermined historical period, the cache queue is at risk of overflow, and the cache queue of the corresponding server application needs to be enlarged to cache the application logs.
Optionally, in this embodiment, the historical queue usage is measured by the historical queue utilization, which is determined as the ratio of the historical queue used capacity to the historical queue total capacity. The expected total capacity of each cache queue differs according to its historical usage, and may be determined, but is not limited to being determined, as follows: a target utilization threshold is preset; when the historical queue utilization is less than or equal to the target utilization threshold, the current capacity of the queue is probably sufficient for the corresponding server application, no capacity adjustment is needed, and the expected total capacity of the cache queue is its historical total capacity; when the historical queue utilization is greater than the target utilization threshold, the queue is at risk of overflow, the expected total capacity of the cache queue needs to be adjusted to be larger than its historical total capacity, and the minimum value of the adjusted expected total capacity may be the ratio of the historical queue used capacity to the target utilization threshold.
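As a minimal sketch in C of the capacity-adjustment rule just described (assuming capacities measured in bytes and a fractional utilization threshold; the rounding policy is an assumption):

```c
#include <stddef.h>

/* Compute the expected total capacity of a cache queue from its
 * historical used capacity and total capacity. If the utilization
 * (first ratio) is within the target threshold, the capacity is
 * kept; otherwise the result is strictly greater than the second
 * ratio used/threshold, so the new utilization would stay within
 * the threshold. Rounding up by one byte is an assumption. */
size_t expected_total_capacity(size_t used, size_t total, double threshold) {
    double utilization = (double)used / (double)total;  /* first ratio */
    if (utilization <= threshold)
        return total;             /* capacity is sufficient; no change */
    double second_ratio = (double)used / threshold;
    return (size_t)second_ratio + 1;  /* strictly greater than the ratio */
}
```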
As an optional solution, the allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, includes one of the following:
S41, when the storage requirement information of the ith server application among the N server applications is an ith expected queue total capacity, and the maximum capacity allowed for a single cache queue is greater than or equal to the ith expected queue total capacity, allocating to the ith server application a cache queue whose capacity is at least the ith expected queue total capacity;
Optionally, in this embodiment, for example, if the ith expected queue total capacity is 20G and the maximum capacity is 40G, a cache queue of at least 20G is allocated.
S42, when the storage requirement information of the ith server application among the N server applications is the ith expected queue total capacity, and the maximum capacity allowed for a single cache queue is smaller than the ith expected queue total capacity, calculating a third ratio P of the ith expected queue total capacity to the maximum capacity; when P is an integer, allocating to the ith server application at least P cache queues each with the maximum capacity; when P is not an integer, calculating a fourth ratio Q of the ith expected queue total capacity to ⌊P⌋+1, and allocating to the ith server application ⌊P⌋+1 cache queues each with a capacity greater than or equal to Q.
Optionally, in this embodiment, for example, if the ith expected queue total capacity is 120G and the maximum capacity is 40G, the third ratio P is 3, and at least 3 cache queues each with the maximum capacity are allocated. As another example, if the ith expected queue total capacity is 90G and the maximum capacity is 40G, the third ratio P is 2.25, so 3 cache queues each with the maximum capacity may be allocated, or 3 cache queues each with a capacity of at least 30G (the fourth ratio Q) may be allocated.
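A sketch in C of this allocation rule (S41/S42), under the assumption that capacities are integer byte counts; the helper name is illustrative:

```c
#include <stddef.h>

/* Split an expected total capacity D across cache queues whose
 * individual capacity may not exceed max_cap. Writes the number of
 * queues and the per-queue capacity to allocate.
 * Examples from the text: D=120G, max=40G -> 3 queues of 40G;
 *                         D=90G,  max=40G -> 3 queues of 30G.   */
void allocate_queues(size_t D, size_t max_cap,
                     size_t *num_queues, size_t *per_queue_cap) {
    if (max_cap >= D) {                   /* S41: one queue suffices  */
        *num_queues = 1;
        *per_queue_cap = D;               /* at least D               */
    } else if (D % max_cap == 0) {        /* S42: P = D/max, integer  */
        *num_queues = D / max_cap;        /* P queues of max_cap      */
        *per_queue_cap = max_cap;
    } else {                              /* S42: P not an integer    */
        size_t n = D / max_cap + 1;       /* floor(P) + 1 queues      */
        *num_queues = n;
        *per_queue_cap = (D + n - 1) / n; /* fourth ratio Q, rounded up */
    }
}
```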
As an optional solution, after allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, the method further includes:
S51, detecting the queue utilization of each of the M cache queues, wherein the queue utilization is a fifth ratio of the used capacity of the corresponding cache queue to its total queue capacity;
S52, recording each cache queue as a historical cache queue, and recording the queue utilization of each cache queue as the historical queue usage parameter of the corresponding historical cache queue.
Optionally, in this embodiment, the utilization may also be, but is not limited to, a sixth ratio of the total used capacity to the total capacity of the cache queue group corresponding to a single server application. In that case, the whole cache queue group of each server application is recorded as a historical cache queue group, and the utilization of each server application is recorded as the historical queue usage parameter of the corresponding historical cache queue group.
As an optional solution, the extracting the application logs from the M cache queues for flushing includes:
S61, performing R rounds of application log extraction on the M cache queues until all the application logs in the M cache queues have been extracted, to obtain an initial application log sequence;
S62, sorting the application logs in the initial application log sequence according to the log generation time of each application log, to obtain a target application log sequence.
Optionally, in this embodiment, the application logs in the M cache queues may be extracted in, but not limited to, two manners. In the first manner, the extraction operation is performed on the M cache queues in turn, taking a certain number of application logs each time. In the second manner, when cache queues are allocated by application log level, the number of extraction rounds is set according to the number of levels, and extraction proceeds from the highest level to the lowest. For example, with 3 application log levels i: info, e: error, and f: fatal, where priority info < error < fatal, there are cache queues of 3 corresponding levels, and 3 rounds of extraction are performed in turn: round 1 extracts from all the highest-level (fatal) cache queues, round 2 from all the second-highest-level (error) cache queues, and round 3 from all the lowest-level (info) cache queues. The second manner ensures that the highest-priority application logs are processed and flushed in time, which facilitates quick locating and diagnosis when the system fails.
As an optional solution, the performing R rounds of application log extraction on the M cache queues includes:
S71, performing the tth of the R rounds of application log extraction on the M cache queues through the following steps:
S72, acquiring a target extraction order, wherein the target extraction order is a preset order in which application logs are extracted from the M cache queues;
S73, extracting k application logs from each of the M cache queues in turn according to the target extraction order, wherein k is an integer greater than or equal to 1.
Optionally, in this embodiment, the application log records may be, but are not limited to being, extracted from the M cache queues in turn: a preset target extraction order is obtained first, then k application log records are extracted from each cache queue in that order, and the cycle repeats until all application log records have been extracted from all cache queues. During extraction, if a cache queue is detected to hold fewer than k application log records, all of its records are taken out; if a cache queue is detected to be empty, subsequent extraction skips it and continues only with the remaining non-empty queues until the application log records of all non-empty queues have been completely extracted.
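A sketch in C of one extraction round under these rules; the log_entry and log_queue structures are simplified stand-ins for the real cached records and ring queues (assumptions made for the example):

```c
#include <stddef.h>

/* One cached application log record (simplified). */
typedef struct {
    long long timestamp;  /* enqueue time from the header */
    char level;           /* 'i', 'e' or 'f'              */
    const char *payload;
} log_entry;

/* A simplified view of one cache queue for extraction. */
typedef struct {
    log_entry *entries;
    size_t count;  /* logs held in this queue    */
    size_t next;   /* next index to be extracted */
} log_queue;

/* Perform one round: visit the M queues in the preset target
 * extraction order, taking up to k logs from each. A queue holding
 * fewer than k logs yields whatever remains; empty queues are
 * skipped. Returns the number of logs appended to out at out_pos. */
size_t extract_round(log_queue *queues, const size_t *order, size_t M,
                     size_t k, log_entry *out, size_t out_pos) {
    size_t taken = 0;
    for (size_t i = 0; i < M; i++) {
        log_queue *q = &queues[order[i]];
        for (size_t j = 0; j < k && q->next < q->count; j++)
            out[out_pos + taken++] = q->entries[q->next++];
    }
    return taken;
}
```

Calling extract_round repeatedly until it returns 0 drains all non-empty queues and yields the initial application log sequence.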
As an optional solution, the sorting the application logs in the initial application log sequence according to the log generation time of each application log, to obtain a target application log sequence, includes:
S81, extracting time parameter information from the header information of each application log in the initial application log sequence, wherein the time parameter information indicates the log generation time of the corresponding application log;
S82, sorting the application logs in the initial application log sequence according to the time parameter information, to obtain the target application log sequence.
Optionally, in this embodiment, fig. 5 is a schematic diagram of an initial application log sequence according to an embodiment of the present application. As shown in fig. 5, all application log records currently in all cache queues are obtained in the manner described above; each application log includes header information and data information, and the header information includes application log level information and timestamp information (equivalent to the time parameter information).
Optionally, in this embodiment, fig. 6 is a schematic diagram of a target application log sequence according to an embodiment of the present application. As shown in fig. 6, after the header information of each application log is parsed in turn and the timestamp information is obtained, the full set of application log records is sorted in time order, and the sorted sequence is determined as the target application log sequence.
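Continuing the sketch (reusing log_entry from the extraction example above), sorting the initial sequence by the timestamp carried in each header might look like:

```c
#include <stdlib.h>

/* Compare two application logs by generation time, oldest first. */
static int by_timestamp(const void *a, const void *b) {
    const log_entry *x = (const log_entry *)a;
    const log_entry *y = (const log_entry *)b;
    return (x->timestamp > y->timestamp) - (x->timestamp < y->timestamp);
}

/* Produce the target application log sequence in place. */
void sort_by_generation_time(log_entry *seq, size_t n) {
    qsort(seq, n, sizeof(log_entry), by_timestamp);
}
```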
As an optional solution, after allocating to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, the method further includes:
S91, when a target server application among the N server applications has a target application log to be stored, detecting the current storage state of the target cache queue corresponding to the target server application;
S92, when the storage state indicates that the target cache queue has no remaining storage space, performing a target storage operation on the target application log according to the target log level of the target application log and the reference log levels of a plurality of reference application logs currently cached in the target cache queue, wherein a log level indicates the importance of the corresponding log.
Optionally, in this embodiment, while the cache queue group of the target server application still has storage space during storage of target application logs, application logs of all levels may be added to the group; when the cache queue group is detected to be full, lower-level application logs in the cache queue are overwritten first and higher-level application logs are retained, so that higher-level application logs are preserved to the greatest extent.
As an optional solution, the performing a target storage operation on the target application log according to the target log level of the target application log and the reference log levels of the plurality of reference application logs currently cached in the target cache queue includes:
S101, locating, from the plurality of reference application logs, one candidate application log whose reference log level is lower than the target log level;
S102, determining the target storage operation to be overwriting the candidate application log with the target application log.
Optionally, in this embodiment, fig. 7 is a schematic diagram of the state of a cache queue that stores application logs of all levels indiscriminately, according to an embodiment of the present application. As shown on the left of fig. 7, new application logs keep arriving while the cache queue is full. The flow for storing a new application log into the full queue is as follows: first, the application log at the head of the cache queue is taken as the reference application log, and it is judged whether its log level is greater than or equal to the level of the new log; if so, the next entry in the cache queue is taken as the reference application log, and its level is parsed and compared with that of the new application log; the reference application log is advanced in this way, following the storage order of the cache queue, until a reference application log whose level is lower than that of the new application log is found, and that reference application log is overwritten with the new one. In the example on the left of fig. 7, the level of the current new application log is "e" and a reference application log of level "i" is found; since "i" is lower than "e", the new application log is inserted into the queue, overwriting that reference application log. All pending new application logs are inserted into the cache queue by level according to this flow, which completes their enqueue operation, as shown on the right of fig. 7.
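A sketch in C of this head-to-tail overwrite scan for a full, mixed-level queue, reusing log_entry from the earlier sketch (array indexing stands in for ring traversal; the behavior when no lower-level log exists is an assumption, since the text does not specify it):

```c
#include <stddef.h>

/* Numeric rank of a log level: info < error < fatal. */
static int level_rank(char level) {
    switch (level) {
        case 'i': return 0;
        case 'e': return 1;
        default:  return 2;  /* 'f' */
    }
}

/* Scan a full queue from the head (oldest entry first) for the
 * first reference log whose level is lower than the new log's,
 * and overwrite it. Returns 0 on success, or -1 when every cached
 * log has an equal or higher level (assumed: new log not stored). */
int store_when_full(log_entry *entries, size_t count, log_entry incoming) {
    for (size_t i = 0; i < count; i++) {
        if (level_rank(entries[i].level) < level_rank(incoming.level)) {
            entries[i] = incoming;  /* overwrite the candidate log */
            return 0;
        }
    }
    return -1;
}
```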
Optionally, in this embodiment, fig. 8 is a schematic diagram of the state of cache queues that store logs by application log level, according to an embodiment of the present application. As shown in fig. 8, new application logs keep arriving while the cache queue group is not full. The flow for storing a new application log is as follows: first, the level of the new application log is detected, for example "f", and queue 4, which caches application logs of level "f", is found; if that queue is not full, the new application log is inserted into it directly. If the cache queue corresponding to the new application log's level is full, the new log is inserted into the non-full queue whose cached application log level differs least from the new log's level. For example, when the new application log is detected as the "16 f log" (a level "f" log), queue 4, which caches level "f" logs, is found to be full; the queue whose cached log level currently differs least from level "f" is judged to be queue 5, so the new application log is inserted into queue 5. All pending new application logs are inserted into the cache queues by level according to this flow, which completes their enqueue operation, as shown on the right of fig. 8.
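For this level-grouped variant, a sketch in C of the queue-selection rule (own-level queue first, otherwise the non-full queue with the nearest level; the tie-breaking and the full-group return value are assumptions):

```c
#include <stddef.h>
#include <stdlib.h>  /* abs */

/* One per-level cache queue inside a queue group. */
typedef struct {
    int level;   /* rank of the logs this queue stores (0=i, 1=e, 2=f) */
    size_t used;
    size_t cap;
} level_queue;

/* Choose which queue of a group receives a new log of level lv:
 * prefer the queue dedicated to lv; if it is full, fall back to
 * the non-full queue whose level differs least from lv.
 * Returns the queue index, or -1 if the whole group is full. */
int pick_queue(const level_queue *group, int n, int lv) {
    int best = -1, best_diff = 0;
    for (int g = 0; g < n; g++) {
        if (group[g].used >= group[g].cap)
            continue;                    /* skip full queues         */
        if (group[g].level == lv)
            return g;                    /* own-level queue has room */
        int diff = abs(group[g].level - lv);
        if (best < 0 || diff < best_diff) {
            best = g;
            best_diff = diff;
        }
    }
    return best;
}
```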
As an optional solution, the locating, from the plurality of reference application logs, a candidate application log whose reference log level is lower than the target log level includes:
S111, acquiring alternative application logs one by one from the plurality of reference application logs in order of log generation time, and comparing the reference log level of each alternative application log with the target log level, wherein a reference application log with an earlier log generation time is acquired earlier;
S112, when the reference log level of an alternative application log is lower than the target log level, determining that alternative application log as the candidate application log;
S113, when the reference log level of an alternative application log is greater than or equal to the target log level, continuing to acquire the next alternative application log from the plurality of reference application logs in order of log generation time.
Optionally, in this embodiment, fig. 9 is a schematic diagram of the flow of flushing server application logs. As shown in fig. 9, the system allocates an independent cache queue to each server application and sets the size of the corresponding cache queue according to the application's actual running state, so that each server application can store application logs into its queue as they are actually generated. The application logs of each server application are then taken out of the cache queues in multiple rounds to obtain an initial application log sequence; the timestamp information contained in the application log header information is parsed, and the application logs are sorted in chronological order to obtain a target application log sequence. The logs may also be filtered by application log level, and the relevant data stored in a storage unit in the system, to help the user understand more accurately how each application in the system is being used and to further optimize the capacity of the cache queues. The method thereby solves the problems of thread contention between server applications, thread priority inversion, and inconsistency between log behavior and actual server application behavior.
The log flushing method provided by the present application involves interaction among several objects: the host (server), the APPs (server applications), the ring queues (cache queues), the consumer (thread), and the sender (UART). Fig. 10 is a schematic diagram of a server application log flushing interaction flow according to an embodiment of the present application. As shown in fig. 10, the host creates a plurality of APP server applications according to the service scenario and creates a log-collection task. The host then performs system initialization, including UART initialization and configuration of parameters such as baud rate, parity mode, and stop bits. Each server application then applies for and initializes a dedicated ring queue, whose size can be configured flexibly according to the service. Each server application first judges whether its ring queue is full and checks whether the remaining capacity of the ring queue can hold the application log to be added. If so, the generated application logs are all written to the tail of its own ring queue; if the ring queue is full, the new log content is inserted according to application log level. When an application log is written to the ring queue, its log level information (i: info / e: error / f: fatal) and the timestamp information of the enqueue (the time parameter information) must be recorded. When the log-collection task executes, it first judges in turn whether each queue is empty and, if not, takes the data content out of each queue in turn starting from the head. By parsing the timestamp information in the logs, the collected logs are sorted in time order. Finally, the complete log information is passed to the UART and output to the host side, so that the user can conveniently inspect the running state of the current system online. The complete log is then filtered by log level and stored in a storage unit in the system, helping the user understand more accurately how each application in the system is being used. Note that fig. 10 shows only one server application and its ring queue; in practice, multiple applications and multiple ring queues are created.
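Tying the earlier sketches together, the log-collection task might be structured as below; uart_write() is a hypothetical platform function (an assumption), and extract_round() and sort_by_generation_time() are the sketches given earlier:

```c
#include <stddef.h>

extern void uart_write(const char *line);  /* hypothetical UART output */

/* Collection task: drain the M cache queues in rounds, sort the
 * gathered logs by timestamp, then emit them over the serial port.
 * scratch is assumed large enough to hold every cached log. */
void collect_log_task(log_queue *queues, const size_t *order,
                      size_t M, size_t k, log_entry *scratch) {
    size_t total = 0, got;
    do {  /* repeat rounds until every queue is empty */
        got = extract_round(queues, order, M, k, scratch, total);
        total += got;
    } while (got > 0);
    sort_by_generation_time(scratch, total);  /* target sequence */
    for (size_t i = 0; i < total; i++)
        uart_write(scratch[i].payload);       /* flush to host side */
}
```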
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present application.
This embodiment also provides a log flushing device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the device described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 11 is a block diagram of the structure of a log flushing device according to an embodiment of the present application. As shown in fig. 11, N server applications run on a server, N being an integer greater than 1, and the device is applied to the server and includes:
an acquisition module 1102, configured to acquire storage requirement information of each of the N server applications, wherein the storage requirement information indicates the storage space required by the corresponding server application to cache the application logs it generates;
an allocation module 1104, configured to allocate to each of the N server applications a cache queue that satisfies the corresponding storage requirement information, to obtain M cache queues, wherein each of the M cache queues is configured to cache application logs generated by the corresponding one of the N server applications, and M is an integer greater than or equal to N;
and an extraction module 1106, configured to extract the application logs from the M cache queues and flush them to storage.
In one exemplary embodiment, the acquisition module includes:
an acquisition unit, configured to collect historical queue usage parameters of each of the N server applications, wherein the historical queue usage parameters indicate how the corresponding server application has used a historical cache queue, and the historical cache queue is a cache queue allocated to that server application at a historical stage;
and a first generation unit, configured to generate the storage requirement information of the corresponding server application according to the historical queue usage parameters.
In an exemplary embodiment, the acquisition unit includes:
collecting the historical queue utilization of each of the N server applications, wherein the historical queue utilization is a first ratio of the historical queue used capacity to the historical queue total capacity;
determining the historical queue utilization as the historical queue usage parameter;
the generating the storage requirement information of the corresponding server application according to the historical queue usage parameter includes:
when the historical queue utilization is greater than a target utilization threshold, generating an expected queue total capacity according to a second ratio of the historical queue used capacity to the target utilization threshold, wherein the expected queue total capacity is greater than the second ratio and is the total capacity to which the historical cache queue is to be adjusted;
and determining the expected queue total capacity as the storage requirement information.
In one exemplary embodiment, the allocation module comprises one of:
an allocation unit, configured to, when the storage requirement information of the ith server application among the N server applications is an ith expected queue total capacity and the maximum capacity allowed for a single cache queue is greater than or equal to the ith expected queue total capacity, allocate to the ith server application a cache queue whose capacity is at least the ith expected queue total capacity;
a computing unit, configured to, when the storage requirement information of the ith server application is the ith expected queue total capacity and the maximum capacity allowed for a single cache queue is smaller than the ith expected queue total capacity, calculate a third ratio P of the ith expected queue total capacity to the maximum capacity; when P is an integer, allocate to the ith server application at least P cache queues each with the maximum capacity; when P is not an integer, calculate a fourth ratio Q of the ith expected queue total capacity to ⌊P⌋+1 and allocate to the ith server application ⌊P⌋+1 cache queues each with a capacity greater than or equal to Q, where P and Q are positive numbers.
In an exemplary embodiment, the apparatus further comprises:
a first detection module, configured to, after a cache queue satisfying the corresponding storage requirement information is allocated to each of the N server applications and M cache queues are obtained, detect the queue utilization of each of the M cache queues, wherein the queue utilization is a fifth ratio of the used capacity of the corresponding cache queue to its total queue capacity;
and a recording module, configured to record each cache queue as a historical cache queue, and record the queue utilization of each cache queue as the historical queue usage parameter of the corresponding historical cache queue.
In one exemplary embodiment, the extraction module includes:
an extraction unit, configured to perform R rounds of application log extraction on the M cache queues until all the application logs in the M cache queues have been extracted, to obtain an initial application log sequence;
and a second generation unit, configured to sort the application logs in the initial application log sequence according to the log generation time of each application log, to obtain a target application log sequence.
In an exemplary embodiment, the extraction unit includes:
performing the tth of the R rounds of application log extraction on the M cache queues through the following steps:
acquiring a target extraction order, wherein the target extraction order is a preset order in which application logs are extracted from the M cache queues;
and extracting k application logs from each of the M cache queues in turn according to the target extraction order, wherein k is an integer greater than or equal to 1.
In an exemplary embodiment, the second generating unit includes:
extracting time parameter information from the header information of each application log in the initial application log sequence, wherein the time parameter information indicates the log generation time of the corresponding application log;
and sorting the application logs in the initial application log sequence according to the time parameter information, to obtain the target application log sequence.
In one exemplary embodiment, the apparatus further comprises:
a second detection module, configured to, after a cache queue satisfying the corresponding storage requirement information is allocated to each of the N server applications and M cache queues are obtained, detect, when a target server application among the N server applications has a target application log to be stored, the current storage state of the target cache queue corresponding to the target server application;
and an execution module, configured to, when the storage state indicates that the target cache queue has no remaining storage space, perform a target storage operation on the target application log according to the target log level of the target application log and the reference log levels of a plurality of reference application logs currently cached in the target cache queue, wherein a log level indicates the importance of the corresponding log.
In one exemplary embodiment, the execution module includes:
a positioning unit, configured to locate, from the plurality of reference application logs, one candidate application log whose reference log level is lower than the target log level;
and an overwriting unit, configured to determine the target storage operation to be overwriting the candidate application log with the target application log.
In an exemplary embodiment, the positioning unit includes:
acquiring alternative application logs one by one from the plurality of reference application logs in order of log generation time, and comparing the reference log level of each alternative application log with the target log level, wherein a reference application log with an earlier log generation time is acquired earlier;
when the reference log level of an alternative application log is lower than the target log level, determining that alternative application log as the candidate application log;
and when the reference log level of an alternative application log is greater than or equal to the target log level, continuing to acquire the next alternative application log from the plurality of reference application logs in order of log generation time.
It should be noted that each of the above modules may be implemented by software or hardware. For the latter, the above modules may be, but are not limited to being, all located in the same processor, or located in different processors in any combination.
Embodiments of the present application also provide a computer program product, including a computer program which, when executed by a processor, implements the steps of the method described in the various embodiments of the present application, as well as a non-volatile computer-readable storage medium storing a computer program which, when executed by a processor, implements those steps.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to, various media that can store a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
An embodiment of the application further provides an electronic device, fig. 12 being a schematic diagram of an electronic device according to an embodiment of the application, as shown in fig. 12, the electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices; and they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by computing devices. In some cases, the steps shown or described may be performed in a different order than described herein; alternatively, the modules or steps may be fabricated separately into individual integrated circuit modules, or several of them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present application should be included in the protection scope of the present application.