CN117176811B - Server architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware - Google Patents
- Publication number
- CN117176811B CN117176811B CN202311444873.7A CN202311444873A CN117176811B CN 117176811 B CN117176811 B CN 117176811B CN 202311444873 A CN202311444873 A CN 202311444873A CN 117176811 B CN117176811 B CN 117176811B
- Authority
- CN
- China
- Prior art keywords
- client
- instruction
- thread
- socket
- hardware
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Computer And Data Communications (AREA)
- Multi Processors (AREA)
Abstract
The invention discloses a server-side architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware. The server-side architecture comprises a network interaction module and a hardware control module. The network interaction module comprises a global structure array, an anonymous pipe, a client connection monitoring thread and a client instruction analysis thread; the client connection monitoring thread blocks while listening for connection requests from new clients, and the client instruction analysis thread blocks when idle and, when awakened, receives, reads and analyzes client instructions. The hardware control module comprises a concurrent instruction queue, a sequential instruction queue, and concurrent and sequential execution threads for executing client instructions. The architecture overcomes the problems of existing multi-client communication server architectures, which are not lightweight enough and continuously occupy memory and processor resources even when there are no client connection requests or client instructions. The disclosed architecture is lightweight, its threads are in a blocking state when idle, system operating efficiency is high, energy consumption is low, and instructions can be executed concurrently or sequentially.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular to a server-side architecture, system, and method for blocking and monitoring multiple clients and controlling multiple hardware.
Background
In communication fields such as avionics and 5G radio frequency, there is great demand for concurrently controlling one or more hardware devices over a network with a C/S architecture. The client is typically an upper-computer interface program that issues control instructions; the device on which the server-side program is installed then parses each instruction and controls the specific controlled hardware device through a driver file descriptor.
The traditional multiplexed-IO server-side architecture generally allocates a separate process to each client request. This approach is simple in theory, but in practice its cost in device resources is large, and the memory of an embedded main control device used as a communication server is very tight, which imposes strict lightweight requirements on the server-side program it runs. Patent CN113553199A, published October 26, 2021, entitled "Method and apparatus for handling multiple client accesses using asynchronous non-blocking mode", discloses a method comprising an active-set active thread and thread pool module, an auxiliary-set active thread and thread pool module, multiple servers and a load balancing module, which can handle simultaneous access requests from multiple clients to a certain extent. However, because that patent uses a non-blocking design, its threads cannot enter a fully dormant state when there are no client requests and continuously occupy processor and system memory resources.
Disclosure of Invention
The invention overcomes the problems of existing multi-client communication server-side architectures, which are not lightweight enough and continuously occupy memory and processor resources when there are no client connection requests or client instructions, and provides a server-side architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware.
In order to achieve the above purpose, the present invention adopts the following scheme:
The server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware comprises a network interaction module and a hardware control module, wherein:
The network interaction module comprises:
a global structure array, whose element structure members include the socket of each connected client and the client information, wherein each socket is a blocking socket;
an anonymous pipe, including a read port and a write port;
a client connection monitoring thread, which creates a blocking listening socket and then calls the accept function to block while listening for connection requests from new clients; it blocks and waits when no new client connects, wakes up when a new client requests a connection and obtains the new client's socket, saves the socket and the client information into the global structure array, writes arbitrary data into the write port of the anonymous pipe so that its read port becomes readable, and then calls accept again to continue blocking and waiting for connection requests from other clients;
a client instruction analysis thread, which defines an empty file set fdSet of type fd_set, sequentially adds the read port of the anonymous pipe and all sockets in the global structure array to fdSet, and then calls the select function to monitor whether any file in fdSet becomes readable; when no client connection request or client instruction is received, the thread blocks inside the select call, and it wakes up and performs operations when one of the following occurs:
it is awakened by a system exception, and performs the corresponding exception handling according to the exception return value and the actual exception condition;
the read port of the anonymous pipe or a client socket becomes readable and awakens it; the select function filters the files in fdSet and keeps only the readable ones; the thread first checks whether the read port of the anonymous pipe is still in fdSet, and if so drains it to restore it to an unreadable state; otherwise it checks one by one whether each client socket in the global structure array is still in fdSet, reads and analyzes the instruction content received on each socket still in fdSet, and then caches the instruction, the result of analyzing an instruction including the controlled device and the control mode specified by the instruction; the thread then clears fdSet, adds the read port of the anonymous pipe and all sockets in the global structure array to fdSet again, and calls select again to monitor fdSet for the next round, waiting for a new readable file to appear;
the hardware control module includes:
a concurrent instruction queue and a sequential instruction queue, used to classify and cache the analyzed instructions: if an instruction involves access to a critical resource it is cached in the sequential instruction queue, otherwise it is cached in the concurrent instruction queue;
a concurrent execution thread, which reads instructions from the concurrent instruction queue in first-in-first-out order and controls the controlled hardware concurrently by calling the corresponding controlled-hardware driver interfaces;
a sequential execution thread, which reads instructions from the sequential instruction queue in first-in-first-out order and controls the controlled hardware sequentially by calling the corresponding controlled-hardware driver interfaces, starting each instruction in the sequential instruction queue only after execution of the previous instruction has finished;
and the client connection monitoring thread, client instruction analysis thread, concurrent execution thread and sequential execution thread all belong to the same process and run asynchronously and concurrently.
Preferably, the cancel type of the client connection monitoring thread, client instruction analysis thread, concurrent execution thread and sequential execution thread is set to PTHREAD_CANCEL_ASYNCHRONOUS, so that when any thread encounters a runtime error, the functions pthread_cancel and pthread_join can be called to stop the running of all other threads and reclaim their thread resources.
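As an illustrative sketch only (the spinning worker body and the function names are invented for the demonstration), the cancellation and resource-reclamation pattern above can be exercised like this:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/* Illustrative worker standing in for any of the four threads: it switches
 * to asynchronous cancellation, signals readiness, then spins. */
static atomic_int ready;

static void *worker(void *arg)
{
    (void)arg;
    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);
    atomic_store(&ready, 1);
    for (;;)
        ;                        /* no cancellation points required */
}

/* Error-recovery path: cancel the thread and reclaim its resources with
 * pthread_cancel followed by pthread_join. Returns 0 on success. */
int cancel_and_reap(void)
{
    pthread_t t;
    void *ret = NULL;

    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return -1;
    while (!atomic_load(&ready))
        ;                        /* wait until async cancellation is armed */
    if (pthread_cancel(t) != 0 || pthread_join(t, &ret) != 0)
        return -1;
    return ret == PTHREAD_CANCELED ? 0 : -1;
}
```

Without PTHREAD_CANCEL_ASYNCHRONOUS, a thread that spins without reaching a cancellation point could not be stopped this way.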
Preferably, the network interaction module blocks the system's SIGPIPE, SIGQUIT and SIGINT signals in advance.
Preferably, the client instruction analysis thread reads a client instruction using the function read, recv or recvfrom, passing the buffer capacity as the read-length parameter, and writes to the client socket using the function write, send or sendto. If the return value of a read or write call is less than or equal to 0, the thread skips reading and writing for that client and closes its socket, then finds and clears the corresponding element structure information from the global structure array, and then continues to monitor and process the sockets of the other clients.
Preferably, the architecture includes a read-write protection mechanism for the client sockets in the global structure array, which uses one file lock per socket and places read-write operations on a client socket inside the critical section formed between lockf(socket, F_LOCK, 0) and lockf(socket, F_ULOCK, 0), where socket is the protected socket descriptor.
Preferably, the network interaction module creates a semaphore sem_connecticlt indicating the number of connected clients, with an initial value of 0. After a new client connection request is detected and the connection is established, a V operation is performed on sem_connecticlt; after an exit instruction from a connected client is received, a P operation is performed on sem_connecticlt, the client's socket is closed, and the client's element structure member information is cleared from the global structure array. The client instruction analysis thread performs one P operation on sem_connecticlt after all client instructions have been read; when the value of sem_connecticlt is less than 0, the client instruction analysis thread enters a blocking state, and an additional V operation is performed on sem_connecticlt after the thread is awakened.
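A minimal sketch of this counting semaphore using POSIX semaphores (the function names are illustrative; note that a POSIX semaphore's value never drops below zero, so the "value less than 0" condition in the text corresponds to sem_wait blocking once the count reaches zero):

```c
#include <assert.h>
#include <semaphore.h>

/* Counting semaphore tracking the number of connected clients. */
static sem_t sem_connected;

void on_client_connected(void) { sem_post(&sem_connected); }  /* V operation */
void on_client_exited(void)    { sem_wait(&sem_connected); }  /* P operation */

/* Current count of connected clients. */
int connected_count(void)
{
    int v = 0;
    sem_getvalue(&sem_connected, &v);
    return v;
}
```

A thread calling sem_wait when the count is zero blocks until another thread posts, which is exactly the dormant-when-idle behavior the architecture aims for.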
Preferably, in the server-side architecture, all operations that read or modify elements of the global structure array are completed inside a critical section formed by semaphores or mutex locks.
Preferably, the hardware control module further creates a semaphore b and a semaphore s, both with initial value 0, indicating the number of instructions currently in the concurrent instruction queue and the sequential instruction queue respectively. The client instruction analysis thread performs a V operation on semaphore b after caching an analyzed instruction in the concurrent instruction queue, and a V operation on semaphore s after caching an analyzed instruction in the sequential instruction queue; the concurrent execution thread performs a P operation on semaphore b before reading an instruction from the concurrent instruction queue, and the sequential execution thread performs a P operation on semaphore s before reading an instruction from the sequential instruction queue.
The present application also provides a system for blocking listening to multiple clients and controlling multiple hardware, comprising:
a user upper computer, comprising a number of client devices with network communication capability, which send user instructions over the network;
controlled hardware, which provides hardware driver interfaces and driver modules for completing the corresponding hardware operations;
an embedded main control network device, whose architecture adopts the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware provided by any of the above schemes or preferred schemes. The embedded main control network device runs a server-side program, is connected to the controlled hardware and is loaded with the driver modules. The server-side program receives and parses the instructions sent by the user upper computer to obtain hardware control information, and calls the corresponding hardware driver interface and driver module according to that information so that the controlled hardware completes the corresponding hardware operation.
The present application also provides a method for blocking and monitoring multiple clients and controlling multiple hardware, based on the server-side architecture provided by any of the above schemes or preferred schemes. The method includes the following steps:
S1: define a global structure array for storing client information and the blocking sockets of connected clients; the client connection monitoring thread creates a blocking listening socket; the client instruction analysis thread creates a file set fdSet of type fd_set and creates an anonymous pipe comprising a read port pipe[0] and a write port pipe[1];
S2: the client connection monitoring thread calls the accept function and blocks waiting for a new client connection request; the client instruction analysis thread collects the sockets in the global structure array together with pipe[0], adds them to the file set fdSet, and then calls select to block waiting for a readable file to appear in fdSet; the concurrent execution thread and the sequential execution thread block waiting for instructions to be stored in the concurrent instruction queue and the sequential instruction queue respectively;
S3: when a connection request from a new client is received, the client connection monitoring thread is awakened, updates the new client's socket returned by accept and the client information into the global structure array, and writes an arbitrary character into the write port pipe[1] of the anonymous pipe so that the read port pipe[0] in fdSet becomes readable; when an instruction sent by a connected client arrives on its socket, the corresponding socket becomes readable;
S4: the client instruction analysis thread is awakened; select filters fdSet and keeps only the readable files. The thread checks whether the anonymous pipe's read port pipe[0] is still in the filtered fdSet; if so, it drains pipe[0] and returns to step S2. Otherwise, it checks one by one whether each client socket in the global structure array is in the readable set, reads and analyzes the instruction content of each socket still in fdSet, caches instructions not involving critical resources into the concurrent instruction queue and wakes the concurrent execution thread to read and execute them, and caches instructions involving critical resources into the sequential instruction queue and wakes the sequential execution thread to read and execute them; when all instructions have been cached into the instruction queues, it returns to step S2.
The invention provides at least the following beneficial effects: (1) the single-process design does not occupy excessive memory resources and is sufficiently lightweight; (2) all threads are blocked when idle and do not continuously occupy the CPU, improving the overall operating efficiency of the communication system and devices; (3) by distinguishing whether a client instruction involves access to critical resources and routing it to the sequential or concurrent instruction queue, race conditions caused by simultaneous access to the same critical hardware resource are avoided, and the execution order of such instructions is kept consistent with the receiving order; (4) the running, blocking and stopping of each thread and task are asynchronous and concurrent without mutual interference, so thread resources in a blocked state can be reclaimed smoothly when an error occurs; (5) per-socket file locks achieve mutually exclusive access to each client socket while keeping the code lightweight and reentrant; (6) the method can flexibly control multiple hardware devices while monitoring instructions from multiple clients, and adapts well to embedded communication.
Drawings
FIG. 1 is a schematic diagram of the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware;
FIG. 2 is a schematic diagram of a system for blocking and listening to multiple clients and controlling multiple hardware;
fig. 3 is a flowchart of a method for blocking and monitoring multiple clients and controlling multiple hardware.
Detailed Description
The present invention is described in further detail below with reference to the drawings to enable those skilled in the art to practice the invention by referring to the description.
As shown in fig. 1, a server architecture for blocking and monitoring multiple clients and controlling multiple hardware includes a network interaction module and a hardware control module, wherein:
the network interaction module comprises:
a global structure array, whose element structure members include the sockets of connected clients and client information, each socket being a blocking socket. The information in the global structure array is shared by all threads in the process; each element structure member records the socket of a connected client and that client's basic information. Using blocking sockets lets the connection between server and client satisfy the blocking condition, guaranteeing that the thread can enter a blocked, dormant state at the point where select is called to monitor client sockets while no client request or instruction has been received. When a new client connects, an element structure member is added to the global structure array to record its information; when a client disconnects, the corresponding element structure member information is deleted, ensuring that the array stays synchronized in real time with the set of connected clients. During actual server operation, access to and modification of the global structure array are mutually exclusive, which guarantees code reentrancy.
an anonymous pipe, including a read port and a write port. The anonymous pipe is a special file of Unix-like systems, created by the pipe() function, with two fixed ports: pipe[0] is the read port and pipe[1] is the write port. Writing data to the write port makes the read port readable, and data written to the write port can be read out from the read port in the order it was written. An anonymous pipe occupies only a small amount of memory and releases its space when the process ends.
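The wake-up behavior of the anonymous pipe can be sketched as follows (the function name is invented for the illustration):

```c
#include <assert.h>
#include <unistd.h>

/* Write one byte to pipefd[1] so that pipefd[0] becomes readable,
 * then drain pipefd[0] to restore the unreadable state.
 * Returns 0 on success. */
int pipe_roundtrip(void)
{
    int pipefd[2];
    char c = 'x', buf;

    if (pipe(pipefd) != 0)
        return -1;
    write(pipefd[1], &c, 1);               /* the content is irrelevant */
    ssize_t n = read(pipefd[0], &buf, 1);  /* drain the read port */
    close(pipefd[0]);
    close(pipefd[1]);
    return (n == 1 && buf == 'x') ? 0 : -1;
}
```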
a client connection monitoring thread, which creates a blocking listening socket and then calls the accept function to block while listening for connection requests from new clients. It blocks and waits when no new client connects, and wakes up when a new client requests a connection, obtaining the new client's socket; it stores the socket and client information in the global structure array, writes arbitrary data to the write port of the anonymous pipe so that the read port becomes readable, and then calls accept again to continue blocking and waiting for connection requests from other clients. When no new connection request is detected, accept blocks, so the whole client connection monitoring thread blocks and is suspended by the system, yielding its memory and computing resources, until it resumes after the next new client connection request. The thread is then awakened to process the new client's connection request; while updating the new client's socket and information into the global structure array, it writes arbitrary data into the write port of the anonymous pipe. The written data only serves to make the read port readable; its content need not have any actual meaning.
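The listening-socket setup can be sketched as below. To stay self-contained the example connects to itself over the loopback, whereas the architecture's monitoring thread would instead block in accept until a real client arrives; the function name is invented for the illustration:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a blocking listening socket, connect to it over the loopback,
 * and accept the connection. Returns 0 on success. */
int accept_one_loopback(void)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);

    int lfd = socket(AF_INET, SOCK_STREAM, 0);    /* blocking by default */
    if (lfd < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                            /* let the OS pick a port */
    if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(lfd, 8) != 0 ||
        getsockname(lfd, (struct sockaddr *)&addr, &len) != 0) {
        close(lfd);
        return -1;
    }

    int cfd = socket(AF_INET, SOCK_STREAM, 0);    /* stand-in "client" */
    if (cfd < 0 ||
        connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        close(lfd);
        if (cfd >= 0) close(cfd);
        return -1;
    }

    int afd = accept(lfd, NULL, NULL);            /* wakes with new socket */
    int ok = afd >= 0;
    if (afd >= 0) close(afd);
    close(cfd);
    close(lfd);
    return ok ? 0 : -1;
}
```

In the real thread, the descriptor returned by accept and the client information would be saved into the global structure array before accept is called again.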
a client instruction analysis thread, which defines an empty file set fdSet of type fd_set, sequentially adds the read port of the anonymous pipe and all sockets in the global structure array to fdSet, and then calls the select function to monitor whether any file in fdSet becomes readable; when no client connection request or client instruction is received, the thread blocks inside the select call, and it wakes up and performs operations when one of the following occurs:
it is awakened by a system exception, and performs the corresponding exception handling according to the exception return value and the actual exception condition. System exceptions generally include abnormal closing of a client, interruption by a signal, occurrence of an event, and so on. When an exception occurs, the corresponding exception return value is returned; the actual exception information is then obtained from the return value, and the corresponding handling is performed according to the system log so that the system returns to a normal running state.
the read port of the anonymous pipe or a client socket becomes readable and awakens it; the select function filters the files in fdSet and keeps only the readable ones. The thread first checks whether the read port of the anonymous pipe is still in fdSet; if so, it drains the read port to restore it to an unreadable state; otherwise it checks one by one whether each client socket in the global structure array is still in fdSet, reads and analyzes the instruction content received on each socket still in fdSet, and then caches the instruction; the result of analyzing an instruction includes the controlled device and the control mode specified by the instruction. The file set fdSet is then cleared, the read port of the anonymous pipe and all sockets in the global structure array are added to fdSet again, and select is called again to monitor fdSet for the next round, waiting for a new readable file to appear.
The client instruction analysis thread queries the information in fdSet by calling select. The condition for select to continue running is that a readable file appears in fdSet, i.e. at least one socket or the read port of the anonymous pipe in fdSet becomes readable; otherwise the whole client instruction analysis thread blocks along with select. The select call takes the form select(maxfd + 1, &fdSet, NULL, NULL, NULL), where maxfd is the largest file descriptor in fdSet.
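The select call described above can be sketched as follows, using the anonymous pipe's read port as the monitored file (the function name is invented for the illustration):

```c
#include <assert.h>
#include <sys/select.h>
#include <unistd.h>

/* Add the pipe's read port to fdSet, make it readable by writing to the
 * write port, and verify that select(maxfd + 1, &fdSet, NULL, NULL, NULL)
 * reports it readable. Returns 0 on success. */
int select_on_pipe(void)
{
    int pipefd[2];
    fd_set fdSet;

    if (pipe(pipefd) != 0)
        return -1;
    FD_ZERO(&fdSet);
    FD_SET(pipefd[0], &fdSet);
    write(pipefd[1], "x", 1);                   /* makes pipe[0] readable */

    int maxfd = pipefd[0];
    int n = select(maxfd + 1, &fdSet, NULL, NULL, NULL);
    int readable = (n == 1) && FD_ISSET(pipefd[0], &fdSet);

    char c;
    read(pipefd[0], &c, 1);                     /* drain, as the thread does */
    close(pipefd[0]);
    close(pipefd[1]);
    return readable ? 0 : -1;
}
```

Had nothing been written to the pipe, the select call would block indefinitely, which is the idle state of the client instruction analysis thread.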
When the read port of the anonymous pipe is readable, the client connection monitoring thread has received a new client's connection request and written data into the pipe's write port; at this moment the new client's socket and information are already recorded in the global structure array. The read port of the anonymous pipe is drained so that it can normally signal the next new connection request, and then all client sockets in the global structure array are refreshed into the file set fdSet; the new client has thereby become a connected client whose instructions can be received normally. After these operations, select is called again; if a readable socket is found, the client instruction is read normally. Because a new client does not necessarily send an instruction immediately after requesting a connection, there may well be no readable socket, in which case the thread enters the blocking state again.
When a socket becomes readable, the client instruction analysis thread calls FD_ISSET(ct[i].s, &fdSet) to screen the readable sockets in turn and reads the corresponding client instructions until all have been received. It analyzes the received instructions and classifies them according to the hardware devices they need to control: instructions that do not involve critical-resource access are cached in the concurrent instruction queue of the hardware control module, and instructions that do involve critical-resource access are cached in the sequential instruction queue. A critical resource is a hardware resource that cannot be accessed by two or more threads simultaneously; the mutual-exclusion mechanism ensures that client instructions operating on critical resources do not fail because of resource conflicts. The client connection monitoring thread of the network interaction module then continues blocking and waiting for new client connections, the client instruction analysis thread continues calling select to monitor for readable sockets, and execution of the instructions is handed over to the hardware control module. When reading a client instruction, the thread passes the maximum length of the buffer that temporarily stores the instruction, so that all data in a screened readable socket can be read out in one call without a second copy during analysis; the return value of the read function indicates whether the client closed abnormally or the read was interrupted by a signal.
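The return-value handling can be sketched with socketpair standing in for a connected client socket (the function name is invented for the illustration):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read from a connected socket passing the buffer capacity as the length,
 * and detect the peer closing: a read() result of 0 or below means the
 * client's socket should be closed and its array entry cleared.
 * Returns 0 when the close was detected as expected. */
int detect_client_close(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;
    write(sv[1], "CMD", 3);                     /* client sends an instruction */
    close(sv[1]);                               /* then the client "exits" */

    ssize_t n = read(sv[0], buf, sizeof(buf));  /* buffer capacity as length */
    if (n <= 0) {                               /* unexpected here */
        close(sv[0]);
        return -1;
    }

    n = read(sv[0], buf, sizeof(buf));          /* peer gone: returns 0 */
    close(sv[0]);
    return n == 0 ? 0 : -1;
}
```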
The protocol frame used for instructions exchanged with the client carries the payload length in the frame header, so the start and end of each instruction's content in the buffer can be identified quickly, and during a traffic burst it can be quickly determined whether all fragments of an instruction frame were read in one call. Data from the other screened clients is therefore read first, and the incomplete instruction is then read a second time and analyzed after reassembly, ensuring fairness and real-time response across multiple clients.
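Assuming, purely for illustration, a one-byte payload-length header (the patent does not specify the header width), the completeness check described above can be sketched as:

```c
#include <assert.h>
#include <stddef.h>

/* Count the complete length-prefixed frames in buf[0..len) and report,
 * via *rem, how many trailing bytes belong to a not-yet-complete frame
 * that must be read again and reassembled. */
size_t count_frames(const unsigned char *buf, size_t len, size_t *rem)
{
    size_t off = 0, frames = 0;

    while (off < len) {
        size_t need = 1 + (size_t)buf[off];  /* header + payload */
        if (len - off < need)
            break;                           /* burst split mid-frame */
        off += need;
        frames++;
    }
    *rem = len - off;
    return frames;
}
```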
The hardware control module includes:
a concurrent instruction queue and a sequential instruction queue, used to classify and store the analyzed instructions: an instruction is stored in the concurrent instruction queue if it does not involve access to critical resources, and in the sequential instruction queue if it does. After an instruction is received and analyzed by the client instruction analysis thread, the system decides, according to whether the resources the instruction needs involve critical resources, whether to cache it in the concurrent or the sequential instruction queue; both queues are read and written first-in-first-out. When the queued instructions are read and executed, several instructions can be taken from the concurrent instruction queue at once and executed, whereas in the sequential instruction queue the next instruction can be read and executed only after the current one has finished.
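A minimal sketch of the two queues and the classification step (the instruction struct, queue capacity and flag names are invented for the illustration; the counting semaphores b and s of the preferred scheme are omitted here):

```c
#include <assert.h>
#include <pthread.h>

#define QCAP 32                      /* illustrative fixed capacity */

struct instr { const char *device; int uses_critical; };

struct queue {
    struct instr items[QCAP];
    int head, tail;
    pthread_mutex_t lock;
};

static struct queue concurrent_q = { .lock = PTHREAD_MUTEX_INITIALIZER };
static struct queue sequential_q = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Append one instruction under the queue's mutex (FIFO order). */
static void q_push(struct queue *q, const struct instr *in)
{
    pthread_mutex_lock(&q->lock);
    q->items[q->tail % QCAP] = *in;
    q->tail++;
    pthread_mutex_unlock(&q->lock);
}

/* Route a parsed instruction: critical-resource access goes to the
 * sequential queue, everything else to the concurrent queue. */
void classify(const struct instr *in)
{
    q_push(in->uses_critical ? &sequential_q : &concurrent_q, in);
}

int q_len(const struct queue *q) { return q->tail - q->head; }
```

In the full design, each q_push would be followed by a V operation on the queue's counting semaphore so that the matching execution thread wakes up.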
a concurrent execution thread, which reads instructions from the concurrent instruction queue in first-in-first-out order and controls the corresponding controlled devices concurrently by calling the corresponding hardware driver interfaces. Since the client instructions in the concurrent instruction queue do not involve critical resources, they can be executed concurrently; each instruction executes independently and quickly, which greatly improves execution efficiency. According to its processing capacity, the concurrent execution thread reads and executes a batch of the earliest instructions in the queue; as instructions complete, it continues reading on a first-in-first-out basis, keeping a certain number of instructions executing simultaneously until the queue is empty. The hardware driver file descriptors used are configured by the system, and hardware devices can be controlled directly through these descriptors; the descriptors used by some client instructions are contained in the client information and need not be looked up in the system configuration file.
a sequential execution thread, which reads instructions from the sequential instruction queue in first-in-first-out order and controls the corresponding controlled devices sequentially by calling the corresponding hardware driver interfaces. Each client instruction in the sequential instruction queue involves at least one critical resource; if the critical resource needed by the next instruction is being accessed by the currently executing instruction, the next instruction can execute only after the current one releases the resource. The sequential execution thread therefore executes the instructions in the queue one by one on a first-in-first-out basis, guaranteeing fairness of client instruction execution.
The client connection monitoring thread, the client instruction analysis thread, the concurrent execution threads and the sequential execution thread all belong to the same process and run asynchronously and concurrently. Because all threads run asynchronously, concurrently and independently of one another, the client instruction analysis thread does not have to wait for the concurrent or sequential execution threads to finish the instructions already cached in the queues before reading and analyzing new client instructions, so the architecture achieves very high operating efficiency.
While a server using this architecture is running, the client connection monitoring thread and the client instruction analysis thread are blocked when idle. If an instruction arrives from a connected client, the client instruction analysis thread is awakened directly and the select function returns so that execution continues: the thread checks whether the anonymous pipe's read port is readable, and the connected client sockets in the global structure array ct[] are added, together with the anonymous pipe's read port, to the file set fdSet for the next select call. When a new client initiates a connection request, or a connected client socket receives a client instruction, the thread is again awakened at the select call site, receives the readable sockets in the file set fdSet, reads and analyzes the corresponding client instructions, and passes them to the hardware control module, which executes each instruction with either a concurrent execution thread or the sequential execution thread according to whether the instruction involves critical resources.
Because a new client's socket is not yet stored in the file set fdSet, a connection request or instruction from a new client cannot wake the client instruction analysis thread by itself; an instruction from an already-connected client would wake the thread, and the new client's instruction would then also be received. To avoid missing a newly connected client's instructions while the client instruction analysis thread is blocked because no old client has made a request, the monitoring thread writes data to the anonymous pipe's write port. The pipe's read port in the file set fdSet then becomes readable, the thread wakes in time to add the new client's socket to the file set fdSet, and after calling select again it can receive and read the new client's instructions.
In this architecture, the client connection monitoring thread and the client instruction analysis thread are designed to block when idle, releasing memory and computing resources when there are no connection requests or client instructions; this improves the overall operating efficiency of the communication system and reduces its energy consumption. The global structure array stores the client sockets, and the client instruction analysis thread is awakened through the anonymous pipe, so a new client's connection request and instructions are never missed because the analysis thread is blocked, and a fast response is guaranteed. Classifying client instructions and executing them with concurrent execution threads and a sequential execution thread respectively improves instruction execution efficiency. In a typical application scenario, such as fewer than 20 connected clients on a multi-core CPU clocked above 1 GHz, the network interaction module completes a full cycle, including sequentially reading and analyzing all instructions of all screened readable clients, within nanosecond-scale time, so every client obtains a quick response and is unaware of any time-division multiplexing that may occur in some situations. The device running the server program can load all driver modules of the controlled hardware in advance to ensure that the hardware driver interface functions can be called normally.
The whole architecture adopts a single-process, multi-threaded design: connection monitoring, instruction reading and instruction execution for multiple clients are all completed by a single process, so it is sufficiently lightweight. If the volume of instruction requests exceeds the capacity of a single process, ultra-high load can be handled by replicating the process and load-balancing across the copies.
In another technical scheme, the thread attributes of the client connection monitoring thread, the client instruction analysis thread, the concurrent execution threads and the sequential execution thread are all set to PTHREAD_CANCEL_ASYNCHRONOUS; when any thread encounters a runtime error, the functions pthread_cancel and pthread_join are called to stop all other threads and reclaim their resources. Among the thread attributes is the cancelability type; with it set to PTHREAD_CANCEL_ASYNCHRONOUS, a thread can be withdrawn from operation at any time without having to reach a cancellation point, so when a task fails, pthread_cancel can asynchronously cancel the other threads, avoiding the problem of a blocked thread being unable to release its own resources and becoming a zombie thread.
The network interaction module masks the system's SIGPIPE, SIGQUIT and SIGINT signals in advance, preventing server threads from being interrupted mid-execution or abnormally awakened from a blocked state by an abnormal client shutdown or user misoperation, for example IO operations such as read or recv being interrupted, or select being spuriously awakened, when an abnormal interrupt occurs.
In another technical scheme, the client instruction analysis thread reads client instructions using the function read, recv or recvfrom, passing the buffer capacity into the function as the read-length parameter, and writes to client sockets using the function write, send or sendto. If the return value of a read or write call is less than or equal to 0, the thread skips reading and writing for that client and closes its socket, then finds and clears the element structure information corresponding to that client in the global structure array, and continues monitoring and serving the other clients' sockets. When read, recv or recvfrom is called with the read-length parameter set to the buffer capacity defined in the code, and the readable instruction length in the socket is less than or equal to that capacity, all readable content is read at once and the actual length read is returned. A return value less than or equal to 0 means the client socket was closed by the user or the IO operation was abnormally interrupted; to keep the system running normally, reading and writing for that client are skipped and the socket is closed.
The architecture also includes a read-write protection mechanism for the client sockets in the global structure array. It uses file locks in one-to-one correspondence with the sockets and places every read or write operation on a client socket in the critical section formed between lockf(socket, F_LOCK, 0) and lockf(socket, F_ULOCK, 0), where socket is the protected socket. The file lock guarantees mutual exclusion of IO operations on the same client socket.
Operations that read or modify elements of the global structure array are completed in a critical section formed by a semaphore or a mutex lock, preventing multiple threads from accessing the global structure array at the same time.
In another technical scheme, the hardware control module also creates a semaphore b and a semaphore s, both with initial value 0, which indicate the number of instructions currently in the concurrent instruction queue and the sequential instruction queue respectively. The client instruction analysis thread performs one V operation on semaphore b each time it caches a parsed instruction in the concurrent instruction queue, and one V operation on semaphore s each time it caches a parsed instruction in the sequential instruction queue; a concurrent execution thread performs one P operation on semaphore b before reading an instruction from the concurrent instruction queue, and the sequential execution thread performs one P operation on semaphore s before reading an instruction from the sequential instruction queue. P and V are the standard semaphore operations: a P operation decrements the semaphore's value by 1, and when the value drops below 0 the thread blocks; a V operation increments the value by 1, and when the value is at least 0 the thread continues running. The values of semaphores b and s therefore track the number of instructions cached in the corresponding queues. When all instructions in a queue have been executed, the P operations have just brought the semaphore's value below 0, and the queue stays empty until the server receives new client instructions; the corresponding instruction-execution thread blocks because the semaphore's value is negative. Once client instructions are received again, the queue becomes non-empty, the V operations raise the semaphore back to a non-negative value, and the instruction-execution thread is awakened normally.
In another technical scheme, the network interaction module creates a semaphore semConnectClt, with initial value 0, that indicates the number of connected clients. A V operation is performed on semConnectClt after a new client's connection request is monitored and the connection is established; a P operation is performed on semConnectClt after an exit instruction is received from a connected client, that client's socket is closed and its element structure member information in the global structure array is cleared. The client instruction analysis thread performs a P operation on semConnectClt after reading all client instructions, entering a blocking state when the semaphore's value falls below 0, and performs a V operation on semConnectClt once awakened, which prevents the thread from failing to block at the call site because of unexpected corruption unrelated to client connections. The V operation when a new client is detected, the P operation when a client closes, and the P operation the client instruction analysis thread performs in each loop iteration together mean that the thread sleeps and yields its time slice when no client is connected, saving CPU scheduling; when a client connects, the thread is awakened and its V operation keeps the value of semConnectClt always equal to the number of connected clients. The total number of connected clients can therefore be obtained from the semaphore's value when searching the global structure array for connected clients, which speeds up the search.
The present application also provides a system for blocking listening to multiple clients and controlling multiple hardware, as shown in fig. 2, the system comprising:
the user upper computer comprises a plurality of client devices with network communication functions and sends user instructions through a network;
the controlled hardware, which provides a hardware driver interface and a driver module and completes the corresponding hardware operations;
the embedded master control network device, whose architecture adopts the above server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware. The embedded master control network device runs a server program, is connected to the controlled hardware and has the driver modules loaded; the server program receives and analyzes the instructions sent by the user upper computers to obtain hardware control information, and calls the corresponding hardware driver interface and driver module according to that information so that the controlled hardware completes the corresponding hardware operation.
There is at least one client device; the clients can be interface applications running on several different upper computers, or several interface applications running on the same upper computer. The communication system designed in this application can run on an embedded master control network device with a Unix-like operating system installed, receive control instructions from clients over the network, and return feedback signals to the clients. When the embedded master control device is powered on, all the corresponding driver modules are loaded in advance, via a self-starting script or manual insertion, according to the hardware devices that may actually be controlled; the controlled hardware is physically connected through buses such as IIC and SPI, and the corresponding hardware control is realized in software by opening a hardware driver file descriptor and issuing the corresponding system calls.
Because the embedded master control network device adopts the above server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware, the system allows several user upper computers to access and control the controlled hardware through the embedded master control network device simultaneously, and it is lightweight, resource-saving and quick to respond.
The present application also provides a method for blocking and monitoring multiple clients and controlling multiple hardware, based on the server architecture for blocking and monitoring multiple clients and controlling multiple hardware provided by any one of the above schemes, as shown in fig. 3, the method includes the following steps:
S1: defining a global structure array for storing client information and the blocking sockets of connected clients, the client connection monitoring thread creating a blocking monitoring socket; the client instruction analysis thread creating an empty file set fdSet of type fd_set and creating an anonymous pipe comprising a read port pipe[0] and a write port pipe[1];
s2: client connection monitoring thread call function accept blocking waiting for new client connection request; the client instruction analysis thread counts the socket and the pipe [0] in the global structure array to be added into the empty file set fdSet together, and then a function select blocking waiting file set fdSet is called to update readable files; the concurrent execution thread and the sequential execution thread block the waiting concurrent instruction queue and the sequential instruction queue respectively to store instructions;
S3: when a connection request from a new client is received, the client connection monitoring thread being awakened, the new client's socket returned by the accept function and the client information being updated into the global structure array, and an arbitrary character being written to the anonymous pipe's write port pipe[1] so that the anonymous pipe's read port pipe[0] in the file set fdSet becomes readable; when an instruction sent by a connected client is received through a socket, the corresponding socket becoming readable;
S4: the client instruction analysis thread being awakened, the function select screening the file set fdSet and retaining only the readable files in it; the thread confirming whether the anonymous pipe's read port pipe[0] is still in the screened readable file set fdSet and, if so, emptying pipe[0] and returning to step S2; otherwise confirming one by one whether each client socket in the global structure array is in the readable file set fdSet, reading and analyzing the instruction content of each socket still in the readable file set fdSet, caching instructions that do not involve critical resources in the concurrent instruction queue and waking the concurrent execution threads to read and execute them, and caching instructions that involve critical resources in the sequential instruction queue and waking the sequential execution thread to read and execute them; returning to step S2 once all instructions have been cached in the instruction queues.
The following describes a code example, provided in the present application, of the client connection monitoring thread in the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware:
The thread first creates a blocking monitoring socket listenFd in the initial_listen_socket() function, then calls initial_client_fds() to initialize the global structure array gszClient, and then cyclically calls accept() to monitor connecting clients and generate new communication sockets. Several circled places in the code deserve detailed explanation. The first, from top to bottom, sets the thread attribute to PTHREAD_CANCEL_ASYNCHRONOUS, so that even when blocked on accept() or on a mutex lock the thread can be cancelled asynchronously by another thread calling pthread_cancel. The second: when a new client is detected, an arbitrary character or string is written to the anonymous pipe's write port pListen->szPipeFd[1] to wake the client instruction analysis thread, which may be sleeping on select because no old client has sent a new instruction, so that the new client's socket can be added to fdSet. The third performs a V operation on the semaphore semConnectClt when a new client connects, ensuring that the semaphore equals the number of connected clients; the client instruction analysis thread performs a P operation followed by a V operation, so that it remains blocked while no client is connected, saving CPU scheduling.
The following describes a code example, provided herein, of the main portion of the client instruction analysis thread in the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware:
Several circled places in the code deserve detailed explanation. The first, from top to bottom, performs a P operation followed by a V operation on the semaphore semConnectClt, keeping its value equal to the number of connected clients and ensuring the thread sleeps when no client is connected, to save CPU scheduling. The second first obtains the semaphore's value to get the number of connected clients, then calls the statistic_ConnectedClient function to count the connected clients that the monitoring thread has marked and stored in gszClient, adds all connected clients' communication sockets and the anonymous pipe's read port szPipe[0] to the file set comFdSet for monitoring, and calls select to screen the readable sockets; as soon as the monitoring thread accepts a new client connection, it writes data to the anonymous pipe to wake the thread sleeping on select, so the set of connected clients monitored by select is updated in real time. The third empties the read port in time so that the next wakeup works correctly. The fourth cyclically calls search_readableClient, which returns in turn the structure addresses of the readable clients in the global structure array gszClient for subsequent reading and analysis in order.
It should be noted that although the steps are described above in a specific order, this does not mean they must be performed in that order; in fact some of the steps may be performed concurrently or even in a different order, as long as the required functions are achieved. The number of devices and the scale of processing described herein are intended to simplify the description of the invention; applications, modifications and variations of the invention will be apparent to those skilled in the art.
Although embodiments of the present invention have been disclosed above, the invention is not limited to the uses set out in the description and embodiments; it can be applied in various fields suited to it, and further modifications will readily occur to those skilled in the art. Accordingly, the invention is not limited to the specific details shown and described herein, provided there is no departure from the general concepts defined by the claims and their equivalents.
Claims (10)
1. The server architecture for blocking and monitoring multiple clients and controlling multiple hardware is characterized by comprising a network interaction module and a hardware control module, wherein:
the network interaction module comprises:
the global structure array comprises element structure members including a socket of a connected client and client information, wherein the socket is a blocking socket;
an anonymous pipe including a read port and a write port;
a client connection monitoring thread, which creates a blocking monitoring socket and then calls the accept function to block on connection requests from new clients, blocking and waiting when no new client connects; when a new client requests a connection, the thread wakes and accept returns the new client's socket, the thread stores the connected client's socket and client information in the global structure array and writes arbitrary data to the write port of the anonymous pipe so that the pipe's read port becomes readable, and then calls the accept function again to continue blocking and monitoring for connection requests from other clients;
a client instruction analysis thread, which defines an empty file set fdSet of type fd_set, adds the anonymous pipe's read port and all sockets in the global structure array to the file set fdSet in turn, and then calls the select function to monitor whether files in the file set fdSet have become readable; when no client connection request or client instruction has been received, the client instruction analysis thread is blocked in the select call, and it wakes and acts when either of the following occurs:
it is awakened by an abnormal system condition, and carries out the corresponding exception handling according to the abnormal return value and the actual abnormal situation;
it is awakened because the anonymous pipe's read port or a client socket has become readable; the select function screens the files in the file set fdSet and keeps only the readable ones; the thread first confirms whether the anonymous pipe's read port is still in the file set fdSet and, if so, empties it to restore it to an unreadable state; otherwise it confirms one by one whether each client socket in the global structure array is still in the file set fdSet, reads and analyzes the instruction content received by each socket still in the file set fdSet and then caches it, the result of analyzing an instruction comprising the controlled device and control mode specified by the instruction; the thread then clears the file set fdSet, again adds the anonymous pipe's read port and all sockets in the global structure array to the file set fdSet, and calls the select function again to monitor the file set fdSet for the next round, waiting for new readable files to appear in it;
the hardware control module includes:
a concurrent instruction queue and a sequential instruction queue for classifying and caching the analyzed instructions: an instruction that involves access to a critical resource is cached in the sequential instruction queue, otherwise it is cached in the concurrent instruction queue;
a concurrent execution thread, which reads the instructions in the concurrent instruction queue in first-in first-out order and controls the controlled hardware concurrently by calling the corresponding controlled-hardware driver interface;
a sequential execution thread, which reads the instructions in the sequential instruction queue in first-in first-out order and controls the controlled hardware sequentially by calling the corresponding controlled-hardware driver interface, each instruction in the sequential instruction queue starting only after execution of the previous instruction has finished;
wherein the client connection monitoring thread, the client instruction analysis thread, the concurrent execution thread and the sequential execution thread all belong to the same process and run asynchronously and concurrently.
2. The server architecture for blocking and monitoring multiple clients and controlling multiple hardware of claim 1, wherein the thread attributes of the client connection monitoring thread, the client instruction analysis thread, the concurrent execution thread and the sequential execution thread are all set to PTHREAD_CANCEL_ASYNCHRONOUS, and wherein, when any thread encounters a runtime error, the functions pthread_cancel and pthread_join are called to halt the operation of all other threads and reclaim their thread resources.
3. The server architecture for blocking listening to multiple clients and controlling multiple hardware of claim 1 wherein the network interaction module masks SIGPIPE, SIGQUIT, SIGINT signals of the system in advance.
4. The architecture of claim 1, wherein the client instruction analysis thread reads client instructions using the function read, recv or recvfrom, passing the buffer capacity into the function as the read-length parameter, and writes to client sockets using the function write, send or sendto; if the return value of a read or write call is less than or equal to 0, reading and writing for that client are skipped and its socket is closed, the element structure information corresponding to that client is then found and cleared from the global structure array, and monitoring and processing of the other clients' sockets continues.
5. The server architecture for blocking and monitoring multiple clients and controlling multiple hardware of claim 1, comprising a read-write protection mechanism for the client sockets in the global structure array, which uses file locks in one-to-one correspondence with the sockets and places read-write operations on a client socket in the critical section formed between lockf(socket, F_LOCK, 0) and lockf(socket, F_ULOCK, 0), where socket is the protected socket.
6. The server architecture of claim 1, wherein the network interaction module creates a semaphore semConnectClt for indicating the number of connected clients, the initial value of semConnectClt being 0; a V operation is performed once on semConnectClt after a new client connection request is monitored and a connection is established; a P operation is performed once on semConnectClt after an exit instruction of a connected client is received, that client's socket is closed and the element structure member information of that client in the global structure array is cleared; and the client instruction analysis thread performs a P operation on semConnectClt once after all client instructions have been read, enters a blocking state when the value of semConnectClt is less than 0, and performs a V operation on semConnectClt after the thread is awakened.
7. The server architecture of claim 1, wherein the operations of reading or modifying elements of the global structure array are performed in critical sections formed by semaphores or mutex locks.
8. The architecture of claim 1, wherein the hardware control module further creates a semaphore b and a semaphore s with initial values of 0, used to indicate the number of instructions in the current concurrent instruction queue and sequential instruction queue respectively; the client instruction analysis thread performs a V operation on semaphore b after caching a parsed instruction in the concurrent instruction queue, and a V operation on semaphore s after caching a parsed instruction in the sequential instruction queue; the concurrent execution thread performs a P operation on semaphore b before reading an instruction from the concurrent instruction queue, and the sequential execution thread performs a P operation on semaphore s before reading an instruction from the sequential instruction queue.
9. A system for blocking listening to multiple clients and controlling multiple hardware, comprising:
the user upper computer comprises a plurality of client devices with network communication functions and sends user instructions through a network;
the controlled hardware, which provides a hardware driver interface and a driver module and completes the corresponding hardware operations;
the embedded master control network device, whose architecture adopts the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware according to any one of claims 1-8; the embedded master control network device runs a server program, is connected to the controlled hardware and has the driver modules loaded, and the server program is used to receive and analyze the instructions sent by the user upper computer to obtain hardware control information and to call the corresponding hardware driver interface and driver module according to the hardware control information so that the controlled hardware completes the corresponding hardware operation.
10. A method for blocking and monitoring multiple clients and controlling multiple hardware, characterized in that, based on the server-side architecture for blocking and monitoring multiple clients and controlling multiple hardware according to any one of claims 1-8, the method comprises the following steps:
S1: defining a global structure array for storing client information and the blocking sockets of connected clients, the client connection monitoring thread creating a blocking monitoring socket; the client instruction analysis thread creating an empty file set fdSet of type fd_set and creating an anonymous pipe comprising a read port pipe[0] and a write port pipe[1];
S2: client connection monitoring thread call function accept blocking waiting for new client connection request; the client instruction analysis thread counts the socket and the pipe [0] in the global structure array to be added into the empty file set fdSet together, and then a function select blocking waiting file set fdSet is called to update readable files; the concurrent execution thread and the sequential execution thread block the waiting concurrent instruction queue and the sequential instruction queue respectively to store instructions;
S3: when a connection request from a new client is received, the client connection monitoring thread is awakened; it updates the new client's socket returned by the accept function, together with the client information, into the global structure array, and writes an arbitrary character into the write port pipe[1] of the unnamed pipe, so that the read port pipe[0] of the unnamed pipe in the file set fdSet becomes readable; when an instruction sent by a connected client through its socket is received, the corresponding socket becomes readable;
S4: the client instruction analysis thread is awakened; the function select filters the file set fdSet and retains only the readable files in it; the thread checks whether the read port pipe[0] of the unnamed pipe is still in the filtered readable file set fdSet, and if so, clears the read port pipe[0] and returns to step S2; otherwise, it checks one by one whether each client socket in the global structure array is in the readable file set fdSet, reads and parses the instruction content of every socket still in the readable set fdSet, caches the instructions that do not involve critical resources into the concurrent instruction queue and wakes up the concurrent execution thread to read and execute them, and caches the instructions that involve critical resources into the sequential instruction queue and wakes up the sequential execution thread to read and execute them; when all instructions have been cached into the instruction queues, the thread returns to step S2.
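The wake-up mechanism running through steps S2-S4 — select() blocking on pipe[0] until the monitoring thread writes a byte into pipe[1] — can be sketched as follows. This is a minimal, self-contained illustration under stated assumptions: the helper names (`wake_parser`, `drain_wakeup`, `wait_readable`) are hypothetical, and the connected clients' sockets are omitted so that only the pipe's own descriptors are exercised:

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical sketch of the pipe-based wake-up in steps S2-S4.
 * The monitoring thread writes one byte into pipe[1] after updating
 * the global structure array; select() on pipe[0] then returns, and
 * the analysis thread drains the byte and rebuilds its fd_set. */

/* Wake the instruction analysis thread: any single character will do. */
static int wake_parser(int write_fd) {
    char c = 'w';
    return (int)write(write_fd, &c, 1);
}

/* Clear the readable state of pipe[0] so select() can block again. */
static int drain_wakeup(int read_fd) {
    char buf[64];
    return (int)read(read_fd, buf, sizeof buf);
}

/* Block until fd becomes readable or the timeout expires.
 * Returns 1 if readable, 0 on timeout. The fd_set must be rebuilt
 * before every call, because select() modifies it in place. */
static int wait_readable(int fd, int timeout_sec) {
    fd_set fdSet;
    struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
    FD_ZERO(&fdSet);
    FD_SET(fd, &fdSet);
    int n = select(fd + 1, &fdSet, NULL, NULL, &tv);
    return (n > 0 && FD_ISSET(fd, &fdSet)) ? 1 : 0;
}
```

In the full architecture the fd_set would also contain every connected client's socket; after select() returns, FD_ISSET distinguishes a wake-up on pipe[0] (a new client was registered) from an instruction arriving on a client socket.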
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311444873.7A CN117176811B (en) | 2023-11-02 | 2023-11-02 | Server architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311444873.7A CN117176811B (en) | 2023-11-02 | 2023-11-02 | Server architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117176811A (en) | 2023-12-05 |
| CN117176811B (en) | 2024-01-26 |
Family
ID=88930090
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311444873.7A Active CN117176811B (en) | 2023-11-02 | 2023-11-02 | Server architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117176811B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118426900B (en) * | 2024-07-04 | 2025-01-24 | 国电南京自动化股份有限公司 | SCADA screen cache update method based on OpenCL |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6901596B1 (en) * | 1998-05-07 | 2005-05-31 | Hewlett-Packard Development Company, L.P. | Method of communicating asynchronous events to remote procedure call clients |
| CN112380028A (en) * | 2020-10-26 | 2021-02-19 | 上汽通用五菱汽车股份有限公司 | Asynchronous non-blocking response type message processing method |
| CN112559050A (en) * | 2019-09-25 | 2021-03-26 | 北京国双科技有限公司 | Method and device for processing concurrency number of client asynchronous request information |
| CN113553199A (en) * | 2021-07-14 | 2021-10-26 | 浙江亿邦通信科技有限公司 | Method and device for processing multi-client access by using asynchronous non-blocking mode |
| CN115757398A (en) * | 2022-10-31 | 2023-03-07 | 国汽智图(北京)科技有限公司 | Data storage method and device, computer equipment and storage medium |
| CN116089037A (en) * | 2023-01-03 | 2023-05-09 | 上海中通吉网络技术有限公司 | Method and system for implementing asynchronous task processing |
| CN116737395A (en) * | 2023-08-14 | 2023-09-12 | 北京海科融通支付服务有限公司 | Asynchronous information processing system and method |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9262235B2 (en) * | 2011-04-07 | 2016-02-16 | Microsoft Technology Licensing, Llc | Messaging interruptible blocking wait with serialization |
- 2023-11-02: application CN202311444873.7A filed in China (CN); published as CN117176811B; status: Active
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6901596B1 (en) * | 1998-05-07 | 2005-05-31 | Hewlett-Packard Development Company, L.P. | Method of communicating asynchronous events to remote procedure call clients |
| CN112559050A (en) * | 2019-09-25 | 2021-03-26 | 北京国双科技有限公司 | Method and device for processing concurrency number of client asynchronous request information |
| CN112380028A (en) * | 2020-10-26 | 2021-02-19 | 上汽通用五菱汽车股份有限公司 | Asynchronous non-blocking response type message processing method |
| CN113553199A (en) * | 2021-07-14 | 2021-10-26 | 浙江亿邦通信科技有限公司 | Method and device for processing multi-client access by using asynchronous non-blocking mode |
| CN115757398A (en) * | 2022-10-31 | 2023-03-07 | 国汽智图(北京)科技有限公司 | Data storage method and device, computer equipment and storage medium |
| CN116089037A (en) * | 2023-01-03 | 2023-05-09 | 上海中通吉网络技术有限公司 | Method and system for implementing asynchronous task processing |
| CN116737395A (en) * | 2023-08-14 | 2023-09-12 | 北京海科融通支付服务有限公司 | Asynchronous information processing system and method |
Non-Patent Citations (2)
| Title |
|---|
| Research on Real-Time Communication Based on an Asynchronous Multi-Thread Mechanism; Wang Huawei; Railway Signalling & Communication Engineering; Vol. 14, No. 3; full text * |
| Communication Technology of the Electric Power Internet of Things Based on an Asynchronous Non-Blocking Framework; Wu Zhenchong et al.; Electric Power Information and Communication Technology; Vol. 21, No. 10; pp. 1-9 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117176811A (en) | 2023-12-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102011949B1 (en) | System and method for providing and managing message queues for multinode applications in a middleware machine environment | |
| JP2758311B2 (en) | Log file control method in complex system | |
| US7971029B2 (en) | Barrier synchronization method, device, and multi-core processor | |
| US8868492B2 (en) | Method for maximizing throughput and minimizing transactions response times on the primary system in the presence of a zero data loss standby replica | |
| US7523344B2 (en) | Method and apparatus for facilitating process migration | |
| US6823518B1 (en) | Threading and communication architecture for a graphical user interface | |
| US9230002B2 (en) | High performant information sharing and replication for single-publisher and multiple-subscriber configuration | |
| US9904721B1 (en) | Source-side merging of distributed transactions prior to replication | |
| US20130160028A1 (en) | Method and apparatus for low latency communication and synchronization for multi-thread applications | |
| US7809690B2 (en) | Performance metric-based selection of one or more database server instances to perform database recovery | |
| US20080263106A1 (en) | Database queuing and distributed computing | |
| CN110113420A (en) | Distributed Message Queue management system based on NVM | |
| JPH07191944 (en) | System and method for prevention of deadlock in instruction to many resources by multiprocessor | |
| JP2003131900A (en) | Server system operation management method | |
| US20120297216A1 (en) | Dynamically selecting active polling or timed waits | |
| WO2023046141A1 (en) | Acceleration framework and acceleration method for database network load performance, and device | |
| JP2001265611A (en) | Computer system, memory management method, storage medium and program transmitter | |
| CN117176811B (en) | Server architecture, system and method for blocking and monitoring multiple clients and controlling multiple hardware | |
| Marcelino et al. | Goldfish: Serverless actors with short-term memory state for the edge-cloud continuum | |
| WO2024109068A1 (en) | Program monitoring method and apparatus, and electronic device and storage medium | |
| EP1214652A2 (en) | Efficient event waiting | |
| US12327132B2 (en) | Request processing methods and apparatuses, computing device and storage medium | |
| US12159032B2 (en) | Increasing OLTP throughput by improving the performance of logging using persistent memory storage | |
| US12086132B2 (en) | Increasing OLTP throughput by improving the performance of logging using persistent memory storage | |
| US20240045613A1 (en) | Increasing oltp throughput by improving the performance of logging using persistent memory storage |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||