
CN119808041A - Application support system - Google Patents


Info

Publication number
CN119808041A
CN119808041A (application CN202411860628.9A)
Authority
CN
China
Prior art keywords
data
user
recording
application layer
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411860628.9A
Other languages
Chinese (zh)
Inventor
王世骏
孙晓波
张雨琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central United Beijing Certification Center Co ltd
Original Assignee
Central United Beijing Certification Center Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central United Beijing Certification Center Co ltd filed Critical Central United Beijing Certification Center Co ltd

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present invention relates to an application support system comprising: an application layer that, in a multi-user environment, receives requests from a front end or a user and, after big-data processing, determines the application-layer behavior corresponding to the user; a system public service module that provides at least one public component and service, including permission management, log auditing, a data access layer, task scheduling, and a message queue, where permission management includes defining different roles, assigning specific permissions to each role, verifying the user's identity, deciding which resources the user may access or which operations the user may perform based on that identity, and recording the change history of the user's permissions, and log auditing includes recording the application's operation logs, the errors and exceptions raised while the system runs, and the system's performance indicators; and a bus service gateway interface used to interact with multiple systems through multiplexing, such that when data is sent or received during interaction, multiple data streams run in parallel over a single TCP connection.

Description

Application support system
Technical Field
The invention relates to the technical field of computers, in particular to an application support system.
Background
With the development of information technology, enterprise-level applications are increasing, and traditional single applications cannot meet the requirements of high concurrency and high availability. Although the existing application support system solves the problems to a certain extent, the existing application support system still has the defects in the aspects of flexibility, expandability and the like. Therefore, it is important to have an application support system that can accommodate rapidly changing business requirements and that is easily scalable.
Disclosure of Invention
The invention aims at overcoming the defects of the prior art and provides an application support system for solving the problems in the prior art.
To achieve the above object, the present invention provides an application support system comprising:
The system comprises an application layer that, in a multi-user environment, receives a request from a front end or a user, determines the application-layer behavior corresponding to the user, and parses and verifies the request, where verification includes identity verification, permission checking, and data-validity verification;
The system public service module provides at least one public component and service, including permission management, log auditing, a data access layer, task scheduling, a message queue, cache management, and a security framework. Permission management includes defining different roles, assigning specific permissions to each role, verifying the user's identity, determining the resources a user may access or the operations a user may perform based on that identity, and recording the change history of the user's permissions. Log auditing includes recording the application's operation logs, the errors and exceptions raised while the system runs, the system's performance indicators (including response time and CPU utilization), and security-related information, and providing log-analysis tools for diagnosing problems and auditing. The data access layer provides a unified data-access interface and manages the database connection pool, and task scheduling manages timed and scheduled tasks;
The bus service gateway interface interacts with multiple systems through multiplexing; during interaction, multiple data streams carrying sent or received data run in parallel over a single TCP connection.
In a possible implementation, the application layer is further used to handle runtime errors, record error logs, and generate standardized error information.
In one possible implementation, the system further includes a WEB interaction/presentation layer for displaying a user interface and interacting with a user.
In one possible implementation, the application layer includes an enterprise user management platform;
and the enterprise user management platform is used for maintaining account information of enterprises, managing online application, tracking enterprise project information and online interaction with project management.
In one possible implementation, the application layer is also used to analyze the source, content, and historical behavior of a user request based on the request's context information, and to determine the service instance that will process the request.
In one possible implementation, the application layer is further configured to dynamically adjust the caching policy according to the request frequency of user requests and the update frequency of the data, where the caching policy includes keeping frequently requested data in the cache and evicting infrequently requested data from the cache.
In one possible implementation, the system public service module analyzes historical execution data of the task through a machine learning algorithm, predicts execution time and resource requirements of the task, and distributes the task to an optimal execution node.
In one possible implementation manner, the system public service module is used for performing distributed log aggregation and centrally managing logs of a plurality of nodes.
In one possible implementation, the bus service gateway interface is further configured to adjust the sending rate using an adaptive flow-control algorithm driven by network feedback, where the algorithm includes TCP's slow-start and congestion-avoidance mechanisms.
In one possible implementation, the bus service gateway interface is further configured to assign different priority labels to first-type and second-type data and to sort the data before sending, where first-type data has a higher priority than second-type data: first-type data is strongly real-time, while second-type data is less delay-sensitive.
By applying the application support system provided by the embodiments of the invention, integration and management of multiple systems can be realized, with support for various network interfaces such as SMS, WeChat, and payment. In addition, the system provides functions such as permission management and log auditing to improve its security and maintainability.
Drawings
Fig. 1 is a schematic diagram of an application support system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Fig. 1 is a schematic structural diagram of an application support system according to an embodiment of the present invention, and in the following, a technical scheme of the present invention will be described with reference to fig. 1. The application support system comprises an application layer 1, a system public service module 2 and a bus service gateway interface 3.
After a request is verified, the application layer 1 invokes an underlying service layer or business-logic layer to execute the specific service; once the business logic has executed, it constructs response data that meets the expected format and protocol standards, where constructing the response includes data-format conversion, error handling, and setting response headers;
Specifically, the application layer 1 receives a request from a front end or a user, and parses and verifies the request. The request processing comprises preprocessing steps such as identity verification, authority check, data validity verification and the like. After the request is validated, the application layer invokes the underlying service layer or business logic layer to perform the specific business operation. Wherein the business operations include invoking database operations, external service APIs or other third party services, and the like.
After the execution of the business logic is completed, the application layer constructs response data, and the process comprises data format conversion, error processing, setting of a response head and the like, wherein the response construction ensures that the response finally sent to the user accords with the expected format and protocol standard.
The application layer is also used to manage transaction boundaries, ensuring that a series of operations either all succeed or all fail; transaction management is critical to ensuring the consistency and integrity of the data.
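The all-or-nothing transaction boundary described above can be sketched with a small context manager; the in-memory `store` and the snapshot-based rollback are illustrative assumptions, not the patent's actual implementation.

```python
# Sketch of transaction-boundary management: either every operation inside the
# "with" block takes effect, or the store is rolled back to its prior state.
from contextlib import contextmanager
import copy

@contextmanager
def transaction(store):
    snapshot = copy.deepcopy(store)  # state captured at the transaction boundary
    try:
        yield store
    except Exception:
        store.clear()
        store.update(snapshot)       # roll back: restore the snapshot
        raise

store = {"balance": 100}
try:
    with transaction(store):
        store["balance"] -= 30                     # first operation succeeds
        raise RuntimeError("second operation failed")
except RuntimeError:
    pass
print(store["balance"])  # 100 — the partial update was rolled back
```

A real system would delegate this to the database's transaction support; the sketch only shows the boundary semantics.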
The application layer 1 is also used to handle runtime errors and to record detailed logs for debugging and auditing. Error handling should provide user-friendly error messages while protecting the system's sensitive information from leakage.
The application layer 1 is also responsible for security measures such as encrypted communication, SQL-injection prevention, and XSS protection. Security also includes preventing unauthorized access and protecting the privacy of user data.
The application layer 1 may further include a caching mechanism to reduce back-end load and improve response speed, and may use asynchronous processing, such as a message queue, to improve concurrent processing capacity.
The application layer 1 is also used to provide RESTful APIs or other types of APIs for external system calls, making it easier for users to integrate with and use them.
Furthermore, in a multi-user environment the application layer 1 offers personalized configuration options such as the data-access strategy, log level, and performance-monitoring thresholds; after receiving a configuration-selection message from the user terminal, it displays the available options so that each user can customize the application layer's behavior to their own requirements. This personalized configuration increases the flexibility and adaptability of the system.
Further, the application layer 1 is further configured to analyze the source, content and history of the user request based on the context information of the user request, and determine a service instance to process the request.
Specifically, based on the context information of the user request, the application layer dynamically selects the optimal service instance by analyzing the source, the content and the historical behavior of the user request in the context information, thereby improving the efficiency and the response speed of the request processing.
Further, the application layer 1 is further configured to dynamically adjust a caching policy according to a request frequency of a user request and an update frequency of data, where the caching policy includes maintaining data of a high frequency request in a cache, and removing data of a low frequency request from the cache.
Specifically, the application layer 1 dynamically adjusts the caching policy according to the frequency of requests and the update frequency of the data, so as to optimize performance. This includes, but is not limited to, automatically adjusting cache expiration times and storage modes, ensuring that frequently requested data stays in the cache while infrequently requested data is evicted promptly.
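The frequency-driven policy can be sketched as follows; the promotion threshold, the `loader` callback, and the windowed eviction are illustrative assumptions rather than the patent's concrete mechanism.

```python
# Frequency-aware cache sketch: frequently requested keys are promoted into the
# cache, and keys that went cold in the last window are evicted.
from collections import Counter

class FrequencyCache:
    def __init__(self, keep_threshold=3):
        self.hits = Counter()            # request frequency per key (this window)
        self.cache = {}
        self.keep_threshold = keep_threshold

    def get(self, key, loader):
        self.hits[key] += 1
        if key in self.cache:
            return self.cache[key]
        value = loader(key)              # fall back to the backing store
        if self.hits[key] >= self.keep_threshold:
            self.cache[key] = value      # hot key: keep it cached
        return value

    def evict_cold(self):
        # Drop keys whose request frequency fell below the threshold,
        # then start a new measurement window.
        for key in [k for k in self.cache if self.hits[k] < self.keep_threshold]:
            del self.cache[key]
        self.hits.clear()

cache = FrequencyCache(keep_threshold=3)
for _ in range(4):
    cache.get("user:42", lambda k: "profile-data")
print("user:42" in cache.cache)  # True — promoted after repeated requests
```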
Further, the application layer 1 is configured to perform fault tolerance processing through a multi-level error processing mechanism.
Specifically, the application layer 1 provides a multi-level error processing mechanism, so that the errors are captured and processed at different levels, and the robustness of the system is improved. Error processing mechanisms are respectively arranged at an application layer, a service layer and a data layer, so that errors occurring at any level can be captured and processed in time.
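The multi-level mechanism described above can be sketched with one handler per layer; the layer functions, exception classes, and response shape are illustrative assumptions.

```python
# Multi-level error handling sketch: an error raised in the data layer is
# caught at the service layer, wrapped with context, and converted into a
# standardized, user-friendly error at the application layer.
class DataLayerError(Exception): pass
class ServiceLayerError(Exception): pass

def data_layer_query(ok):
    if not ok:
        raise DataLayerError("connection refused")
    return {"rows": []}

def service_layer(ok):
    try:
        return data_layer_query(ok)
    except DataLayerError as e:
        # Captured at the service level and re-raised with added context.
        raise ServiceLayerError(f"query failed: {e}") from e

def application_layer(ok):
    try:
        service_layer(ok)
        return {"status": 200}
    except ServiceLayerError as e:
        # Standardized error for the user; internals go to the log only.
        return {"status": 500, "error": "internal error", "log": str(e)}

print(application_layer(False)["status"])  # 500
```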
Further, in terms of a hardware structure, the application layer comprises an enterprise user management platform;
and the enterprise user management platform is used for maintaining account information of enterprises, managing online application, tracking enterprise project information and online interaction with project management.
Specifically, the enterprise user management platform can identify and classify abnormal states of enterprise accounts using a Bayesian network and a support vector machine (SVM) model. It first collects relevant data, including login records, and then cleans the data: removing invalid records, filling missing values, and standardizing data formats to ensure the quality and consistency of the data. Useful features, such as login time and geographic location, are then extracted for model training. When training the Bayesian network, its structure, i.e. the dependency relationships between nodes (features), is determined; score-and-search algorithms (e.g., the K2 algorithm) or constraint-based algorithms (e.g., the PC algorithm) may be used. During parameter learning, the parameters of each conditional probability table are estimated, using maximum-likelihood or Bayesian estimation. When training the support vector machine, a suitable kernel function (e.g., a linear, polynomial, or RBF kernel) is selected to handle nonlinear relationships, and hyperparameters (such as C and gamma) are tuned with methods such as cross-validation to optimize the model's generalization. The results of the Bayesian network and the SVM are then fused, using voting, weighted averaging, or another ensemble method, to improve the accuracy and robustness of the predictions. Model performance is evaluated with indicators such as accuracy, recall, F1 score, and the ROC curve, and model parameters and feature selection are adjusted continually according to the evaluation results. Finally, the trained model is deployed to the production environment to monitor the state of enterprise accounts in real time.
When the model detects an abnormal state, an early warning mechanism is immediately triggered to inform relevant personnel to further survey and process.
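The fusion step can be illustrated with two stand-in classifiers whose outputs are combined by weighted averaging. The scoring rules below are deliberately trivial placeholders for the Bayesian network and SVM models described above; the feature names, weights, and threshold are all assumptions.

```python
# Model-fusion sketch: two stand-in detectors score a login record, and a
# weighted average of their outputs decides normal vs. abnormal.
def bayes_like_score(login):
    # Placeholder for the Bayesian-network model: flag logins at unusual hours.
    return 1 if login["hour"] < 6 else 0

def svm_like_score(login):
    # Placeholder for the SVM model: flag logins from an unfamiliar location.
    return 1 if login["location"] not in login["known_locations"] else 0

def fuse(login, weights=(0.5, 0.5), threshold=0.5):
    # Weighted-average fusion of the two model outputs.
    score = weights[0] * bayes_like_score(login) + weights[1] * svm_like_score(login)
    return "abnormal" if score >= threshold else "normal"

login = {"hour": 3, "location": "X", "known_locations": {"Beijing"}}
print(fuse(login))  # abnormal — both stand-in detectors flag it
```

When `fuse` returns "abnormal", the early-warning mechanism described above would be triggered.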
The system public service module 2 provides at least one public component and service, including permission management, log auditing, a data access layer, task scheduling, a message queue, cache management, and a security framework. Permission management includes defining different roles, assigning specific permissions to each role, verifying the user's identity, determining the resources a user may access or the operations a user may perform based on that identity, and recording the change history of the user's permissions. Log auditing includes recording the application's operation logs, the errors and exceptions raised while the system runs, the system's performance indicators (such as response time and CPU utilization), and security-related information, and providing log-analysis tools for diagnosing problems and auditing. The data access layer provides a unified data-access interface and manages the database connection pool, task scheduling manages timed and scheduled tasks, and the security framework provides authentication and authorization.
In particular, the system public service module 2 is an important component of the application support system, providing a variety of public components and services that support the efficient operation of different applications. These components and services can be reused across multiple applications, reducing repeated development effort and improving the overall stability and security of the system.
The following is a detailed expanded description of the system public service module 2:
The system public service module verifies the user's identity, ensuring that only legitimate users can access system resources, and determines which resources a user can access and which operations a user can perform based on the user's role and permissions. It defines different roles and assigns specific permissions to each role. It also manages user sessions, ensuring their security and persistence, and records the change history of user permissions for convenient auditing and tracking.
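The role/permission/history scheme above can be sketched as a small role-based access-control class; the `"resource:operation"` permission strings and the class interface are illustrative assumptions.

```python
# Minimal role-based access control sketch with a permission-change audit trail.
from datetime import datetime, timezone

class PermissionManager:
    def __init__(self):
        self.role_permissions = {}   # role -> set of "resource:operation" strings
        self.user_roles = {}         # user -> assigned role
        self.change_history = []     # audit trail of permission changes

    def define_role(self, role, permissions):
        self.role_permissions[role] = set(permissions)

    def assign_role(self, user, role):
        old = self.user_roles.get(user)
        self.user_roles[user] = role
        # Record the change history for later auditing and tracking.
        self.change_history.append({
            "user": user, "old_role": old, "new_role": role,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def can(self, user, resource, operation):
        role = self.user_roles.get(user)
        return f"{resource}:{operation}" in self.role_permissions.get(role, set())

pm = PermissionManager()
pm.define_role("admin", ["report:read", "report:write"])
pm.define_role("viewer", ["report:read"])
pm.assign_role("alice", "viewer")
print(pm.can("alice", "report", "read"))   # True
print(pm.can("alice", "report", "write"))  # False
```

Dynamic permission adjustment, as described later, amounts to calling `define_role` or `assign_role` again at runtime, with each change landing in `change_history`.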
The log audit includes recording an operation log of the application program, including user operation, system operation and the like. And recording errors and abnormal information generated during the running of the system. Performance indicators of the system, such as response time, CPU utilization, etc., are recorded. Security related information such as login attempts, rights changes, etc. is recorded. Log analysis tools are provided to aid in diagnosing problems and auditing.
Further, the system public service module 2 provides a unified data-access interface and supports multiple database types. It supports object-relational mapping to simplify data-access code, manages database transactions to ensure the consistency and integrity of data, optimizes database queries to improve data-access performance, and manages the database connection pool to improve connection utilization.
Further, the system public service module 2 runs timed tasks, such as data backups and report generation, and scheduled tasks, such as cleaning logs in the early morning every day. It supports event-triggered task execution and provides monitoring of task execution state to ensure tasks run normally.
Furthermore, the system public service module 2 adjusts user permissions in real time according to business requirements, performing dynamic permission management at role, resource, and operation granularity.
Specifically, the system public service module 2 adjusts the authority of the user in real time according to the service requirement, and supports fine-grained authority control according to roles, resources and operations. Meanwhile, the system also provides a history record and audit function of the authority change, and transparency and traceability of the authority management are ensured. Dynamic rights management mechanisms are provided that support fine-grained rights control and real-time updating.
Further, the system common service module 2 centrally manages logs from a plurality of nodes through a distributed log aggregation technology, and provides a powerful log analysis tool. The system supports the functions of real-time log stream processing, log searching and log statistics, and helps operation and maintenance personnel to quickly find and solve problems.
Further, the system public service module 2 optimizes task execution efficiency based on intelligent task scheduling of a machine learning algorithm. The historical execution data of the task is analyzed through a machine learning algorithm, and the execution time and the resource requirement of the task are predicted, so that the task is intelligently distributed to the optimal execution node. The intelligent scheduling mechanism remarkably improves the execution efficiency of tasks and the resource utilization rate of a system.
Specifically, the system public service module 2 first collects historical execution records of tasks, including but not limited to task types, input parameters, start time, end time, execution node information, resource consumption (such as CPU, memory, I/O), task results, and the like. And removing invalid or wrong data records, filling in missing values, and standardizing the data format. Useful features are extracted, such as the type, scale, performance index of the executing node, etc., of the task for subsequent modeling use.
Then, feature selection and model training are performed. During feature selection, features with larger influence on task execution time and resource requirements are screened out through methods such as correlation analysis and mutual information. In model selection, suitable machine learning algorithms are selected, such as linear regression, decision trees, random forests, support Vector Machines (SVMs), neural networks, and the like. During model training, historical data is used for training the model, performance of the model is estimated through methods such as cross validation, and super parameters are adjusted to optimize the model effect.
Then the execution time and resource requirements are predicted. For execution time, the trained model predicts the running time of a new task from input features such as the task type, input parameters, and task scale. For resource demand, the model predicts the resources the task will require, such as CPU utilization, memory occupancy, and disk I/O.
Finally, the intelligent scheduling strategy is executed. When selecting a node, the node best suited to the task is chosen according to the predicted execution time and resource requirements; considerations include the node's current load, remaining resources, and network delay. After tasks are distributed, the load of each node is kept balanced, avoiding situations where some nodes are overloaded while others sit idle. During execution, the state of the nodes and the progress of the tasks are continuously monitored, and tasks are rescheduled when necessary to cope with emergencies.
Wherein, when optimizing task execution strategy, real-time monitoring and feedback can be performed. The method specifically comprises the steps of establishing a real-time monitoring system, and collecting various indexes in the task execution process, such as actual execution time, resource consumption and the like. And feeding back the actual execution data to the model, continuously updating and optimizing the model, and improving the prediction accuracy. And detecting abnormal conditions in the task execution process through a statistical method or a machine learning model, and timely taking measures to solve the abnormal conditions. The overall performance of the intelligent scheduling mechanism is periodically evaluated, including task execution efficiency, resource utilization, system throughput, and the like. And according to the evaluation result, links such as feature selection, model training, scheduling strategies and the like are continuously improved, and the overall performance of the system is improved.
In one example, a record of the execution of all tasks over the past year is extracted from a database. The data is flushed, missing values are filled, and the time stamp is normalized. The task type, the input parameters, the task scale and the performance of the execution node are selected as the characteristics through the correlation analysis. Training a time prediction model and a resource demand prediction model by using a random forest algorithm. And selecting an optimal execution node according to the prediction result, and distributing tasks. And monitoring the task execution process and collecting actual data. And feeding back the actual data to the model, and continuously optimizing the prediction precision and the scheduling strategy.
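The predict-then-place loop of that example can be sketched as follows. For brevity a per-type average of historical run times stands in for the random-forest model, and all task, node, and load values are illustrative assumptions.

```python
# Scheduling sketch: predict a task's run time from its history, then assign
# it to the node with the lowest projected completion time.
from statistics import mean

history = {  # task type -> past execution times in seconds
    "report": [12.0, 10.0, 11.0],
    "backup": [60.0, 58.0],
}
node_load = {"node-a": 5.0, "node-b": 20.0}  # current queued work in seconds

def predict_time(task_type):
    # Stand-in for the trained prediction model: average of past runs,
    # with a default for task types never seen before.
    return mean(history.get(task_type, [30.0]))

def schedule(task_type):
    # Choose the node whose current load plus predicted run time is smallest.
    return min(node_load, key=lambda n: node_load[n] + predict_time(task_type))

chosen = schedule("report")
node_load[chosen] += predict_time("report")  # book the work to keep loads balanced
print(chosen)  # node-a
```

Feeding each task's actual run time back into `history` after it completes corresponds to the continuous model-optimization step described above.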
Further, the system public service module 2 automatically selects an appropriate cache tier according to the access frequency and importance of different data: frequently accessed data is stored preferentially in the memory cache, data accessed at medium frequency in the distributed cache, and infrequently accessed data in the database cache. This multi-layer caching strategy, which supports the combined use of memory, distributed, and database caches, effectively improves data-access speed and reduces the load on the database.
Specifically, cache hierarchy division is performed first, into a memory cache, a distributed cache, and a database cache. The memory cache stores data accessed at high frequency and typically uses an in-memory database such as Redis or Memcached. The distributed cache stores data accessed at medium frequency; distributed caching systems such as Apache Ignite or Hazelcast may be used. The database cache stores data accessed at low frequency and is typically implemented at the database level, for example the MySQL query cache.
Then, the access frequency and importance of the data are analyzed. The access frequency and importance of each data item are recorded: the time of each access, the data ID, the access type, and similar information can be captured through logs, and the statistics are computed periodically using batch jobs or a real-time stream-processing framework (e.g., Apache Spark or Flink). The importance of the data is defined according to business rules, for example whether it is key business data or users' personal information.
Next, the dynamic caching policy is executed, selecting an appropriate cache level according to the access frequency and importance of the data. High-frequency data is stored in the memory cache to guarantee the highest access speed; medium-frequency data is stored in the distributed cache, balancing access speed against resource consumption; low-frequency data is stored in the database cache, reducing pressure on the memory and distributed caches.
Finally, cache management is performed: when data changes, the corresponding cache level is updated promptly. Protections against cache penetration, cache breakdown, and cache avalanche can be applied to ensure data consistency and reliability. A reasonable cache expiration time is set to prevent cache resources from being occupied for long periods, and cache eviction can use an LRU (least recently used) or LFU (least frequently used) policy. Before system startup or during peak periods, high-frequency data is preloaded into the memory cache to reduce the performance impact of a cold start.
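The tier-selection and LRU-eviction logic above can be sketched with plain dictionaries standing in for Redis, the distributed cache, and the database cache; the frequency thresholds and memory capacity are illustrative assumptions.

```python
# Multi-tier cache placement sketch with LRU eviction in the memory tier.
from collections import OrderedDict

MEMORY_CAPACITY = 2          # illustrative capacity of the memory tier

memory_tier = OrderedDict()  # hottest data, LRU-evicted (stand-in for Redis)
distributed_tier = {}        # medium-frequency data
database_tier = {}           # cold data (stand-in for the database cache)

def place(key, value, accesses_per_hour):
    if accesses_per_hour >= 100:              # high frequency -> memory tier
        memory_tier[key] = value
        memory_tier.move_to_end(key)          # mark as most recently used
        if len(memory_tier) > MEMORY_CAPACITY:
            memory_tier.popitem(last=False)   # evict the least recently used
    elif accesses_per_hour >= 10:             # medium frequency -> distributed
        distributed_tier[key] = value
    else:                                     # low frequency -> database cache
        database_tier[key] = value

place("hot1", "a", 500)
place("hot2", "b", 300)
place("hot3", "c", 200)   # memory tier full: "hot1" is evicted
place("warm", "d", 50)
place("cold", "e", 1)
print(list(memory_tier))  # ['hot2', 'hot3']
```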
Further, the system public service module 2 can perform various forms of authentication, such as username/password, OAuth, and OpenID Connect, and provides a fine-grained permission-control mechanism. The system also uses encryption and security auditing to ensure the security and integrity of user data.
The bus service gateway interface 3 is used to interact with multiple systems through multiplexing; during interaction, multiple data streams carrying sent or received data run in parallel over a single TCP connection.
The bus service gateway interface 3 may be connected externally to various interfaces, such as SMS, WeChat, payment, and environment authentication.
Further, the bus service gateway interface 3 replaces the conventional text format (e.g., JSON) with a binary format. A compression algorithm can also be introduced to compress the message body, further reducing the amount of data transmitted; this shrinks packet size and lowers parsing overhead.
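The size savings can be illustrated by encoding the same records three ways: JSON text, a fixed-layout binary packing, and that binary packing compressed with zlib. The record fields, the `<IdB` layout, and the use of zlib are illustrative assumptions, not the patent's wire format.

```python
# Size comparison sketch: JSON vs. fixed-layout binary vs. binary + zlib,
# for a batch of repetitive telemetry-like records.
import json
import struct
import zlib

records = [{"id": i, "temp": 20.5, "status": 1} for i in range(1000)]

json_bytes = json.dumps(records).encode("utf-8")

# Binary: each record packed as (uint32 id, float64 temp, uint8 status),
# little-endian with no padding -> 13 bytes per record.
binary = b"".join(struct.pack("<IdB", r["id"], r["temp"], r["status"])
                  for r in records)
compressed = zlib.compress(binary)  # repetitive fields compress well

print(len(json_bytes) > len(binary) > len(compressed))  # True
```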
Further, the bus service gateway interface 3 provides the basic architecture of a high-performance serialization/deserialization library, including its core interfaces and data structures: any data object can be serialized into a byte stream, and the byte stream can be deserialized back into the original data object.
Commonly used serialization formats are defined, such as a binary format, JSON, and Protocol Buffers, and the extension of custom data types is supported. Performance is optimized using SIMD instruction sets: modern processors support SIMD (Single Instruction, Multiple Data) instructions, which process multiple data points with a single instruction and thereby greatly improve performance. Common SIMD instruction sets include Intel's SSE and AVX and ARM's NEON.
Further, by borrowing the multiplexing mechanism of HTTP/2, multiple data streams are allowed to run in parallel on one TCP connection. This effectively avoids head-of-line blocking and improves concurrent processing capacity; transmitting multiple requests or responses simultaneously over one connection reduces the time cost of establishing and tearing down connections.
Specifically, each request or response is treated as an independent data stream, and each data stream is divided into smaller units called frames. A frame consists of a frame header, which carries information such as the frame type, length, and stream identifier, and frame data, the actual content. This reduces the amount of header information transmitted and improves transmission efficiency. The client establishes a TCP connection with the server, and each data stream carries a unique stream identifier. Data streams may be prioritized to ensure that critical data is transmitted first, and the transmission rate of a stream is controlled through window update frames to avoid congestion.
In one example, session initialization is performed first: the client sends an initial frame (setting, e.g., the maximum frame size and the maximum number of concurrent streams), and the server replies with an acknowledgement frame. The client and server may then transmit multiple data streams concurrently, each consisting of multiple frames. An error in a single data stream is handled without affecting the other streams; an error affecting the entire connection may require the connection to be re-established.
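A minimal sketch of the framing and demultiplexing described above. The header layout here (type, stream identifier, length) is an assumption modeled loosely on HTTP/2 framing, not the claimed wire format:

```python
import struct
from collections import defaultdict

# Hypothetical frame header: frame type (u8), stream id (u32), length (u16).
FRAME_HEADER = struct.Struct("!BIH")

def make_frame(frame_type: int, stream_id: int, data: bytes) -> bytes:
    """Prefix the payload with a small header so frames belonging to
    different streams can be interleaved on one TCP connection."""
    return FRAME_HEADER.pack(frame_type, stream_id, len(data)) + data

def demux(buffer: bytes) -> dict:
    """Walk the byte buffer frame by frame and reassemble the
    interleaved frames back into per-stream byte strings."""
    streams = defaultdict(bytes)
    offset = 0
    while offset < len(buffer):
        ftype, sid, length = FRAME_HEADER.unpack_from(buffer, offset)
        offset += FRAME_HEADER.size
        streams[sid] += buffer[offset:offset + length]
        offset += length
    return dict(streams)

# Two logical streams interleaved on the same connection: stream 1's
# second frame arrives after a frame from stream 3, yet both reassemble.
wire = (make_frame(0, 1, b"GET /a") +
        make_frame(0, 3, b"GET /b") +
        make_frame(0, 1, b" part2"))
```

Because each frame names its stream, a loss or error confined to one stream leaves the frames of other streams intact, which is the property the text relies on.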
Further, the bus service gateway interface 3 is further configured to adjust the sending rate according to an adaptive flow control algorithm based on network feedback traffic, where the adaptive flow control algorithm includes the slow start and congestion avoidance mechanisms of TCP.
The slow start mechanism increases the sending rate quickly in the initial stage until network congestion is detected. The congestion avoidance mechanism increases the sending rate gradually once congestion has been detected, so as to avoid congesting the network again. The fast retransmission mechanism retransmits the lost data segment immediately when three duplicate ACKs are received. The fast recovery mechanism gradually increases the sending rate after a fast retransmission, restoring it toward the pre-congestion level. In one example, the parameters are first initialized: the initial congestion window (Congestion Window, CWND) is set to 1 MSS (Maximum Segment Size); the slow start threshold (Slow Start Threshold, SSTHRESH) is initialized to a relatively large value, such as 64 KB; and the receive window (Receiver Window, RWND) is initialized to the receiver's maximum receive window.
For the slow start mechanism, the congestion window is increased by 1 MSS for each ACK received. The congestion window is checked, and if it exceeds the slow start threshold, the congestion avoidance phase is entered.
For the congestion avoidance mechanism, the congestion window is increased by 1/CWND MSS for each ACK received. The congestion window is checked: if it reaches the receive window or the network congestion flag is triggered, the congestion window is reduced.
For the fast retransmission mechanism, when three duplicate ACKs are received, the lost data segment is retransmitted immediately.
For the fast recovery mechanism, the slow start threshold is set to half of the congestion window, the congestion window is then set to the slow start threshold, and the congestion window continues to be increased.
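The mechanisms above can be sketched as a toy state machine (units are MSS; this is an illustrative simplification of standard TCP behavior, not the patented algorithm):

```python
class AdaptiveFlowControl:
    """Toy model of slow start, congestion avoidance, and the fast
    retransmit / fast recovery reaction to three duplicate ACKs."""

    def __init__(self, ssthresh=64):
        self.cwnd = 1.0           # congestion window, starts at 1 MSS
        self.ssthresh = ssthresh  # slow start threshold, in MSS

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            # Slow start: +1 MSS per ACK (exponential growth per RTT).
            self.cwnd += 1
        else:
            # Congestion avoidance: +1/CWND MSS per ACK (additive growth).
            self.cwnd += 1.0 / self.cwnd

    def on_triple_dup_ack(self):
        # Fast retransmit would resend the lost segment here; fast
        # recovery then halves the window and resumes additive growth.
        self.ssthresh = max(self.cwnd / 2, 1)
        self.cwnd = self.ssthresh
```

Driving the model with a stream of ACKs shows the window climbing quickly to the threshold, then creeping upward, then halving on loss, which is the sawtooth the text describes.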
A predictive model is then built. Network state information such as RTT (Round-Trip Time), packet loss rate, and bandwidth is collected periodically. Future network states are predicted using machine learning models (e.g., linear regression, decision trees, or neural networks), and the sending rate is adjusted in advance according to the prediction result to avoid congestion.
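As an illustrative stand-in for the prediction step (ordinary least squares in place of the heavier models mentioned above; the function names are hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_rtt(history_ms):
    """Fit a trend to recent RTT samples and extrapolate one step ahead;
    a rising prediction is a cue to lower the sending rate before loss
    actually occurs, rather than reacting after the fact."""
    xs = list(range(len(history_ms)))
    a, b = fit_line(xs, history_ms)
    return a * len(history_ms) + b
```

For example, a steadily rising RTT history such as [10, 12, 14, 16] ms extrapolates to 18 ms, signalling growing queueing delay; in a real deployment the same idea would be fed loss rate and bandwidth samples as well.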
Further, the bus service gateway interface 3 is further configured to define different priority labels for first-type data and second-type data and to sort the data before sending, where the first-type data has a higher priority than the second-type data, the first-type data being data with strong real-time requirements and the second-type data being data with low delay sensitivity.
Specifically, priority labels are defined, with a different label for each type of data. A priority queue (e.g., a binary heap) is used to manage the data to be transmitted, the data is sorted by priority before transmission, and the sending order is determined by the priority label.
In one example, different priority labels are defined for different types of data: high priority for real-time data such as audio and video streams; medium priority for important but non-real-time data, such as online chat messages; and low priority for delay-insensitive data, such as file downloads. A priority queue is then used to manage the data to be transmitted; it ensures that high-priority data is always processed first. The data is sorted by priority before transmission, using either a built-in priority queue data structure or a custom sorting algorithm. Finally, the sending order is determined by the priority label, ensuring that high-priority data is sent first and low-priority data is appropriately delayed.
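A minimal sketch of the priority-queue scheduling described above, using a binary heap. The three-level labels mirror the example; the tie-breaking sequence counter is an added assumption so that items of equal priority keep FIFO order:

```python
import heapq
import itertools

HIGH, MEDIUM, LOW = 0, 1, 2  # smaller number = higher priority

class SendQueue:
    """Priority-ordered send buffer: high-priority (real-time) data is
    always dequeued before lower-priority traffic."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker within a priority

    def push(self, priority, item):
        heapq.heappush(self._heap, (priority, next(self._seq), item))

    def pop(self):
        return heapq.heappop(self._heap)[2]

# Enqueue out of order; the heap decides the sending order.
q = SendQueue()
q.push(LOW, "file chunk")
q.push(HIGH, "audio frame")
q.push(MEDIUM, "chat message")
```

Popping the queue yields the audio frame first and the file chunk last, regardless of arrival order, which is exactly the behavior the text asks for.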
Further, the bus service gateway interface 3 is further configured to transfer data by means of a zero-copy technique.
Specifically, reducing the number of memory copies lowers system call overhead. Using the zero-copy facilities of the operating system, data is transferred directly from the network buffer to the application without additional memory copy steps; for example, the sendfile() function in the Linux system implements this capability.
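A hedged sketch of the zero-copy path: Python's socket.sendfile() delegates to the Linux sendfile() system call where available (falling back to a userspace read/send loop on other platforms), so the kernel can move bytes from the page cache to the socket buffer without an extra copy through user space:

```python
import socket

def send_file_zero_copy(sock: socket.socket, path: str) -> int:
    """Send a file over a connected stream socket, letting the kernel
    perform the transfer directly (sendfile) when the platform allows.
    Returns the number of bytes sent."""
    with open(path, "rb") as f:
        return sock.sendfile(f)
```

In practice the caller never sees whether the fast path was taken; the function behaves identically either way, which is why zero copy is purely a performance optimization rather than a protocol change.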
Further, the bus service gateway interface 3 is also used for forward error correction coding. After the receiving end receives the data, the original data can be recovered from the redundant information even if part of the data is lost, thereby avoiding unnecessary retransmission requests. The number of retransmissions caused by network packet loss is reduced, and transmission efficiency is improved.
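A minimal illustration of forward error correction using a single XOR parity packet per group. This is an assumption chosen for clarity — production systems typically use stronger codes such as Reed-Solomon — and it assumes equal-length packets:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet to the group; any single lost packet
    in the group can then be rebuilt without a retransmission request."""
    parity = reduce(xor_bytes, packets)
    return list(packets) + [parity]

def recover(received, lost_index):
    """Rebuild the packet at lost_index (marked None) by XOR-ing
    everything that did arrive, parity included."""
    survivors = [p for i, p in enumerate(received)
                 if i != lost_index and p is not None]
    return reduce(xor_bytes, survivors)
```

Because XOR is its own inverse, XOR-ing the survivors cancels every packet except the missing one, so a single loss per group costs no round trip at all — the trade-off is the bandwidth spent on the redundant parity packet.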
An efficient data exchange protocol can thus be constructed for the bus service gateway interface, significantly improving communication performance between different systems and reducing network delay and bandwidth consumption. Such a protocol is particularly suitable for high-concurrency, large-data-volume transmission scenarios such as cloud computing and the Internet of Things.
By applying the application support system provided by the embodiments of the present invention, integration and management of multiple systems can be realized, with support for various network interfaces such as short message (SMS), WeChat, and payment interfaces. In addition, the system provides permission management, log auditing, and other functions to improve its security and maintainability.
Those of skill in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments has been provided for the purpose of illustrating the general principles of the invention and is not meant to limit the scope of the invention or to restrict the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, and the like that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

1. An application support system, the system comprising:
an application layer comprising a service logic layer, a response data constructing module, and a response data processing module, wherein the application layer receives a request from a front end or a user in a multi-user environment, determines the application layer behavior corresponding to the user, and parses and verifies the request, the verification comprising identity verification, an authority check, and data validity verification;
a system public service module for providing at least one public component and service, wherein the public components and services comprise authority management, log auditing, a data access layer, task scheduling, a message queue, cache management, and a security framework; the authority management comprises defining different roles, assigning specific authorities to each role, verifying the identity of a user, determining, according to the identity of the user, the resources the user may access or the operations the user may perform, and recording the change history of user authorities; the log auditing comprises recording the operation log of an application program, recording error and exception information generated when the system runs, recording performance indicators of the system, including response time and CPU utilization, recording security-related information, and providing a log analysis tool for diagnosing problems and auditing; the data access layer is used for providing a unified data access interface and managing a database connection pool; and the task scheduling is used for managing timed tasks and scheduling tasks; and
a bus service gateway interface for interacting with a plurality of systems through multiplexing, wherein, during interaction, multiple data streams are carried in parallel over one TCP connection when data is sent or received.
2. The system of claim 1, wherein the application layer is further configured to handle runtime errors, record error logs, and generate standard error information.
3. The system of claim 1, further comprising a WEB interaction/presentation layer for displaying a user interface and interacting with a user.
4. The system of claim 1, wherein the application layer comprises an enterprise user management platform for maintaining enterprise account information, managing online applications, tracking enterprise project information, and interacting online with project management.
5. The system of claim 1, wherein the application layer is further configured to analyze the source, content, and historical behavior of a user request based on context information of the user request, so as to determine the service instance that processes the request.
6. The system of claim 1, wherein the application layer is further configured to dynamically adjust a caching policy based on the request frequency of user requests and the update frequency of the data, the caching policy including retaining frequently requested data in a cache and removing infrequently requested data from the cache.
7. The system of claim 1, wherein the system common service module analyzes historical execution data of the tasks through a machine learning algorithm, predicts execution time and resource requirements of the tasks, and assigns the tasks to optimal execution nodes.
8. The system of claim 1, wherein the system common service module is configured to perform distributed log aggregation to centrally manage logs of the plurality of nodes.
9. The system of claim 1, wherein the bus service gateway interface is further configured to adjust the sending rate according to an adaptive flow control algorithm based on network feedback traffic, wherein the adaptive flow control algorithm comprises the slow start and congestion avoidance mechanisms of TCP.
10. The system of claim 1, wherein the bus service gateway interface is further configured to define different priority labels for first-type data and second-type data and to perform sorting before sending, wherein the first-type data has a higher priority than the second-type data, the first-type data being data with strong real-time requirements and the second-type data being data with low delay sensitivity.
CN202411860628.9A 2024-12-17 2024-12-17 Application support system Pending CN119808041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411860628.9A CN119808041A (en) 2024-12-17 2024-12-17 Application support system


Publications (1)

Publication Number Publication Date
CN119808041A true CN119808041A (en) 2025-04-11

Family

ID=95278095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411860628.9A Pending CN119808041A (en) 2024-12-17 2024-12-17 Application support system

Country Status (1)

Country Link
CN (1) CN119808041A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052664A1 (en) * 2006-08-25 2008-02-28 Ritwik Batabyal e-ENABLER PRESCRIPTIVE ARCHITECTURE
CN115695139A (en) * 2022-12-29 2023-02-03 安徽交欣科技股份有限公司 Method for enhancing micro-service system architecture based on distributed robust
CN118070053A (en) * 2024-03-01 2024-05-24 江西欧易科技有限公司 Remote management device and method for rotational molding machine supporting multi-user cooperation
CN118473839A (en) * 2024-07-15 2024-08-09 深圳市连用科技有限公司 Security management method and system for file cloud system
CN118802549A (en) * 2024-04-26 2024-10-18 中国移动通信集团设计院有限公司 Optimization method and device of authentication and authorization system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination