CN108200118A - A method for handling high-concurrency requests on a motion simulation platform - Google Patents
- Publication number: CN108200118A
- Application number: CN201711260988.5A
- Authority
- CN
- China
- Prior art keywords
- thread
- server
- request
- idle
- connection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1029—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method for handling high-concurrency requests on a motion simulation platform. The method includes: configuring a server group and receiving user requests; performing pre-simulation processing and enabling multi-threaded handling of user requests; performing clothing heat-and-moisture transfer calculation and human thermoregulation calculation; and feeding back the simulation results while backing up the simulation records to a database. In embodiments of the invention, multiple servers are configured to process requests, tasks are dispatched sensibly, the load is balanced across servers, and large backlogs of requests are avoided. A scheduling algorithm balances the computational pressure on each server, reducing average response time so that every server maintains high computational efficiency. At the same time, a multi-threaded model executes multiple simulation tasks in parallel, shortening users' blocking wait time and effectively improving the platform's throughput and efficiency under high-concurrency load.
Description
Technical Field

The invention relates to the field of computer server technology, and in particular to a method for handling high-concurrency requests on a motion simulation platform.

Background

In modern society, people pay increasing attention to their physical health, and exercise has become an indispensable part of daily life. Sensible exercise plays a major role in promoting health, but many people, lacking an understanding of their own condition and of exercise intensity, experience physical discomfort while exercising; student fainting incidents during long-distance runs, frequently reported in the press in recent years, are one example. Recently, several research teams have developed a human heat-and-moisture exercise simulation algorithm that uses a clothing heat-and-moisture transfer model and a human thermoregulation model to run simulations from user-defined parameters such as body characteristics, exercise plan, and ambient environment, predicting changes in body temperature, sweat rate, and comfort during exercise. The algorithm has broad application prospects, and several applications built on it have been developed to warn users in advance of possible discomfort during exercise and to help them plan their workouts better.

At present, most applications based on this human heat-and-moisture exercise simulation algorithm use a client-server (C-S) architecture: the simulation platform consists of a client and a server. The user enters parameters on the client, which packages them into a request and sends it to the server; the server processes the parameters with the simulation algorithm, computes the results, and returns them to the client for display. The server handles user requests in a strictly serial fashion: it receives one request, processes it, returns the result, and only then accepts the next request, so it cannot process multiple user requests concurrently.

Most existing research on exercise-and-health simulation platforms focuses on improving the algorithm and gives little consideration to the surge in users once a platform is deployed at scale. With few user requests, serial processing on the server poses no great problem. Once the request volume grows, however, the execution time of the simulation algorithm and network instability cause large numbers of requests to pile up on the server. Requests that arrive later then queue for longer, the average time users wait for results increases, and the user experience degrades.
Summary of the Invention

The purpose of the invention is to overcome the deficiencies of the prior art. The main shortcoming of existing motion simulation platform implementations is their inability to cope with high-concurrency requests: under large-scale user load they work inefficiently and respond slowly. The problem can be attacked from two directions, the algorithm and server performance. Improving the algorithm's efficiency is certainly effective, but such research takes a long time, and since the algorithm has already matured there is limited room for improvement. Starting from the platform server and improving its capacity to handle highly concurrent user requests is therefore of great significance for the platform's performance in practice. The invention provides a method for handling high-concurrency requests on a motion simulation platform that improves the efficiency of the platform servers, reduces response latency under large-scale user load, and gives platform users a better experience.

To solve the above problems, the invention proposes a method for handling high-concurrency requests on a motion simulation platform, the method comprising:

configuring a server group and receiving user requests;

performing pre-simulation processing and enabling multi-threaded handling of user requests;

performing clothing heat-and-moisture transfer calculation and human thermoregulation calculation;

feeding back the simulation results and backing up the simulation records to a database.
Preferably, the step of configuring a server group and receiving user requests specifically includes:

configuring a server cluster comprising one scheduling server, one database server, and multiple internal servers, where the IP address of the scheduling server is public while the IP addresses of the database server and all internal servers are not;

maintaining communication between the scheduling server and each internal server through a TCP connection pool;

having the scheduling server receive all user requests without processing them itself, using long-lived connections between the scheduling server and clients, and forwarding user requests to internal servers for processing according to a least-connections strategy.
Preferably, performing pre-simulation processing and enabling multi-threaded handling of user requests specifically includes:

the internal server listens on a designated port; a demultiplexing thread watches for user requests forwarded by the scheduling server and hands them to a network I/O thread, which reads and parses the user data without blocking the listening port;

the network I/O thread extracts the parameters needed for the simulation from the user request packet, including the user ID, body characteristic parameters, clothing settings, exercise plan parameters, and ambient environment parameters;

after parsing the user input, the network I/O thread checks whether the simulation thread pool has an idle thread; if so, it selects one, passes it the simulation parameters, and the clothing heat-and-moisture transfer calculation and human thermoregulation calculation are performed; otherwise the request is appended to the simulation task queue.
Preferably, feeding back the simulation results specifically includes:

when a simulation thread finishes, it notifies the demultiplexing thread, which detects the event, fetches the result data, and invokes a network I/O thread to handle it; the result data comprise the temperature and sweat rate of each body part at each time step;

the network I/O thread sends the result data back to the scheduling server over the TCP connection pool;

upon receiving the result data, the scheduling server rewrites the packet's source address to its own IP address, so as not to expose the internal server's real IP address, and then sends the packet to the user.
Preferably, backing up the simulation records to the database specifically includes:

the network I/O thread sends each simulation record, comprising the user ID, the user's input parameters, and the result data, to the database server;

the data are stored as key-value pairs keyed by user ID; the database server checks whether an entry with the current user ID as key already exists, and if so appends the new simulation record to that entry's list; if not, it first creates a new entry keyed by the user ID and then appends the simulation record to its list field.
Preferably, the TCP connection pool is implemented as follows:

several TCP connections are created in advance between the scheduling server and each internal server. The source port of each connection is a distinct local port of the scheduling server; the destination port is the designated port of the internal server, i.e. the port the main simulation program listens on. A minimum connection count min is set so that at least min TCP connections to each internal server exist at any moment, and a maximum max so that the connections to each internal server never exceed max. An idle timeout idle is also set: when a TCP connection has been continuously idle for longer than idle and the current connection count exceeds min, the connection is closed. When the scheduling server needs to send data to an internal server, it first checks for an idle TCP connection and, if one exists, uses it to send the data. If no connection is idle, it checks whether the number of existing connections to that internal server has reached max; if not, it opens a new TCP connection to send the data, and if so, it appends the data to a waiting queue to be handled once a connection becomes idle. No TCP connection is closed immediately after sending data.
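The pool rules above (prefer an idle connection, grow up to max, otherwise queue, and reap long-idle connections down to min) can be sketched as bookkeeping. This is a minimal illustration, not the patent's implementation: real sockets are replaced by placeholder objects, and the names `TcpConnectionPool`, `acquire`, `release`, and `reap_idle` are assumptions chosen for this sketch.

```python
import time
from collections import deque


class TcpConnectionPool:
    """Bookkeeping sketch of the described pool; placeholder objects stand in for sockets."""

    def __init__(self, min_conns=2, max_conns=8, idle_timeout=30.0):
        self.min_conns = min_conns
        self.max_conns = max_conns
        self.idle_timeout = idle_timeout
        # idle connections stored as (connection, idle-since timestamp)
        self.idle = deque((object(), time.monotonic()) for _ in range(min_conns))
        self.busy = set()
        self.waiting = deque()  # payloads queued until a connection frees up

    def _total(self):
        return len(self.idle) + len(self.busy)

    def acquire(self, payload):
        """Return a connection for `payload`, or None if it had to be queued."""
        if self.idle:
            conn, _ = self.idle.popleft()           # reuse an idle connection
        elif self._total() < self.max_conns:
            conn = object()                         # stand-in for opening a new TCP connection
        else:
            self.waiting.append(payload)            # pool full: queue the data
            return None
        self.busy.add(conn)
        return conn

    def release(self, conn):
        """Connection finished sending: serve a queued payload or go idle (never close here)."""
        self.busy.remove(conn)
        if self.waiting:
            self.waiting.popleft()                  # that payload is now sent on `conn`
            self.busy.add(conn)
        else:
            self.idle.append((conn, time.monotonic()))

    def reap_idle(self):
        """Close connections idle past the timeout, but keep at least min_conns alive."""
        now = time.monotonic()
        while (self.idle and self._total() > self.min_conns
               and now - self.idle[0][1] > self.idle_timeout):
            self.idle.popleft()                     # stand-in for closing the connection
```

Note how `release` never closes the connection, matching the rule that a connection is not torn down immediately after sending data; closing happens only via the idle reaper.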
Preferably, the long-lived connection is implemented as follows:

after receiving the user's data, the server does not disconnect from the client immediately; instead it starts a timer and keeps the connection open until the timer expires. If the server receives another request from the client while the timer is running, the timer restarts. If the client sends no further request before the timer expires, the connection is closed when it does.
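The keep-alive rule above amounts to a resettable deadline. The following is a small sketch under assumed names (`KeepAliveConnection`, `on_request`, `poll`); the injectable clock exists only to make the timer behavior easy to exercise.

```python
import time


class KeepAliveConnection:
    """Sketch of the long-lived connection rule: each request restarts the
    timer, and the connection closes only when the timer expires unused."""

    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.deadline = self.clock() + timeout  # timer starts on receipt of data
        self.open = True

    def on_request(self):
        """Another request arrived while the timer was running: restart it."""
        if self.open:
            self.deadline = self.clock() + self.timeout

    def poll(self):
        """Close the connection if the timer has expired; return whether it is open."""
        if self.open and self.clock() >= self.deadline:
            self.open = False
        return self.open
```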
Preferably, the least-connections strategy is implemented as follows:

the scheduling server records the number of active requests it has forwarded to each internal server, initially zero for all of them. When a request is forwarded to an internal server, or is appended to the waiting queue of that server's TCP connection pool, the active request count for that server is incremented; when a request finishes processing or its connection is aborted by a timeout, the count is decremented. On each incoming request, the scheduling server prefers the internal server with the fewest active requests. If several internal servers tie on active requests, it compares the number of idle connections in its TCP connection pools to those servers and prefers the one with the most idle connections. If several servers still tie on idle connections, one of them is chosen at random.
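The selection rule above is a two-level tie-break. A minimal sketch, with `pick_server` and its two dictionaries being names assumed for illustration:

```python
import random


def pick_server(active, idle_conns):
    """Least-connections selection as described: fewest active requests wins,
    ties go to the server with the most idle pool connections, and any
    remaining tie is broken at random.

    `active` maps server id -> active request count; `idle_conns` maps
    server id -> idle connections in that server's TCP pool.
    """
    fewest = min(active.values())
    candidates = [s for s, n in active.items() if n == fewest]
    if len(candidates) > 1:
        most_idle = max(idle_conns[s] for s in candidates)
        candidates = [s for s in candidates if idle_conns[s] == most_idle]
    return random.choice(candidates)
```

The scheduling server would increment `active[s]` when forwarding (or queueing) a request to server `s` and decrement it when the request completes or times out.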
Preferably, the demultiplexing thread implements the following functions:

the demultiplexing thread's main responsibility is to watch for events occurring in the system and forward each event to the appropriate handler function. It maintains an internal watch list: whenever a new connection is established in the TCP connection pool between the internal server and the scheduling server, the connection's socket is added to the watch list, and whenever a pool connection is closed, its socket is removed. The watch list also covers all simulation worker threads. The demultiplexing thread calls operating-system facilities such as select or epoll to wait for events in a loop, watching for two kinds of event: arrival of user data and completion of a simulation. Whenever an event occurs, the thread is notified by the operating system, looks up the corresponding socket or thread in the watch list, dispatches the event to the appropriate handler function, and then resumes waiting for the next event.
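The watch-and-dispatch loop above can be sketched with Python's standard `selectors` module, which wraps select/epoll as described. This is an illustration under assumed names (`Demultiplexer`, `watch`, `poll_once`), and it only covers socket events; dispatching on worker-thread completion would need a separate notification channel.

```python
import selectors
import socket


class Demultiplexer:
    """Sketch of the demultiplexing thread: a watch list of sockets, each
    tagged with an event kind, dispatched to handler functions via select()."""

    def __init__(self):
        self.sel = selectors.DefaultSelector()  # uses epoll where available
        self.handlers = {}                      # event kind -> handler(sock)

    def watch(self, sock, kind):
        """Add a socket to the watch list (e.g. on a new pool connection)."""
        self.sel.register(sock, selectors.EVENT_READ, data=kind)

    def unwatch(self, sock):
        """Remove a socket (e.g. when a pool connection is closed)."""
        self.sel.unregister(sock)

    def poll_once(self, timeout=1.0):
        """Wait for events, dispatching each ready socket to its handler."""
        results = []
        for key, _mask in self.sel.select(timeout=timeout):
            results.append(self.handlers[key.data](key.fileobj))
        return results
```

In the described design, the handler for the "user data arrived" event would hand the socket to a network I/O thread rather than read it inline, so the loop itself never blocks on parsing.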
Preferably, the network I/O thread implements the following functions:

the network I/O thread extracts the parameters needed for the simulation from the user request packet and, once the simulation has finished, sends the result data back to the scheduling server. Network I/O threads and the demultiplexing thread run concurrently, so parsing user data never blocks the listening port.
Preferably, the simulation thread pool is implemented as follows:

a number of threads dedicated to simulation work are created in advance on the internal server. An empirical formula for the initial pool size is T = C/P, where T is the pool size, C is the number of server CPUs, and P is the fraction of time spent on compute-intensive work. Initially all threads are idle with no tasks to execute. A minimum thread count min and a maximum max are set so that at any moment the number of simulation threads (idle plus busy) is at least min and at most max. A maximum idle timeout idle is set: when a simulation thread has been continuously idle for longer than idle and the pool currently holds more than min threads, that thread is terminated. When a new simulation task arrives, one of three cases applies:

if the pool has an idle thread, one idle thread is chosen at random to execute the task;

if the pool has no idle thread but the current thread count is below max, a new thread is created to execute the task and added to the pool;

if the pool has no idle thread and the current thread count has reached max, the task is appended to the end of the task queue. The queue is served first-in, first-out: when a pool thread finishes a simulation task and becomes idle, the task at the head of the queue is removed and assigned to an idle thread in the pool.
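The three submission cases and the T = C/P sizing rule can be sketched as counter bookkeeping, without spawning real threads. `SimulationThreadPool` and its method names are assumptions for this sketch, not the patent's code.

```python
from collections import deque


class SimulationThreadPool:
    """Bookkeeping sketch of the three submission cases: reuse an idle
    worker, grow the pool up to max_threads, or queue the task FIFO."""

    def __init__(self, cpu_count=4, compute_fraction=0.5, max_threads=16):
        # empirical sizing rule from the text: T = C / P
        initial = max(1, int(cpu_count / compute_fraction))
        self.idle = initial          # idle worker count
        self.busy = 0                # workers currently running a simulation
        self.max_threads = max_threads
        self.queue = deque()         # FIFO task queue

    def submit(self, task):
        if self.idle > 0:                            # case 1: reuse an idle thread
            self.idle -= 1
            self.busy += 1
            return "reused"
        if self.idle + self.busy < self.max_threads:  # case 2: grow the pool
            self.busy += 1
            return "created"
        self.queue.append(task)                       # case 3: queue FIFO
        return "queued"

    def task_done(self):
        """A worker finished: it takes the oldest queued task if any,
        otherwise it returns to the idle set (it is not destroyed)."""
        if self.queue:
            self.queue.popleft()     # the freed worker immediately runs it
        else:
            self.busy -= 1
            self.idle += 1
        return self
```

`task_done` never destroys the worker, matching the reuse rule in the next paragraph; an idle-timeout reaper bounded below by min would shrink the pool separately.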
Preferably, when an internal server finishes a simulation task, the thread is not destroyed immediately but is marked idle again. The benefit of this technique is that threads are reused, which reduces the overhead of creating and destroying threads and thus the time needed to process user requests.
Preferably, storing the data as key-value pairs is implemented as follows:

each data entry contains two fields: the first, the "key", records the user's ID; the second is a list, initially empty. When the database server receives a simulation record from an internal server, it checks whether an entry keyed by the current user ID already exists; if so, it inserts the record into that entry's list, and if not, it first creates a new entry keyed by the user ID and then inserts the record into its list field. To keep the stored volume from exhausting the database server's space, each user ID may retain at most 5 simulation records: whenever a new record is inserted and the list already holds 5 elements, the oldest record is deleted before the new one is inserted.
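The capped per-user history described above is an evict-oldest rule. As an in-memory illustration only (a deployment would use an actual database server), a bounded deque keyed by user ID reproduces the behavior; `save_record` and `MAX_RECORDS` are names assumed for the sketch.

```python
from collections import defaultdict, deque

# Key is the user ID; the value holds at most MAX_RECORDS simulation
# records, oldest first, matching the 5-record cap in the text.
MAX_RECORDS = 5
store = defaultdict(lambda: deque(maxlen=MAX_RECORDS))


def save_record(user_id, record):
    # A deque with maxlen drops its oldest element automatically on
    # overflow, i.e. "delete the earliest record, then insert the new one".
    # A missing key creates a fresh entry, i.e. "create a new entry first".
    store[user_id].append(record)
```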
In embodiments of the invention, multiple servers are configured to process user requests, requests are dispatched sensibly, the load is balanced across servers, and large numbers of requests are kept from piling up on a single server. On this basis, a scheduling algorithm balances the computational pressure on each server, reducing average response time so that every server maintains high computational efficiency. Within each server, a multi-threaded model executes multiple simulation tasks in parallel, shortening users' blocking wait time. In addition, to cope with unstable network conditions and the possibility that simulation results are lost in transit, the results are also backed up on the server side for later retrieval. Together, these measures effectively improve the motion simulation platform's throughput and efficiency under high-concurrency load.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed for describing them are introduced briefly below. The drawings described below are evidently only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flow diagram of a method for handling high-concurrency requests on a motion simulation platform according to an embodiment of the invention;

Fig. 2 is a structural diagram of the configured server group in an embodiment of the invention;

Fig. 3 is a diagram of the model used by an internal server to process user requests in an embodiment of the invention.
Detailed Description

The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the invention.
Fig. 1 is a flow diagram of a method for handling high-concurrency requests on a motion simulation platform according to an embodiment of the invention. As shown in Fig. 1, the method includes:

A1: configure a server group and receive user requests;

A2: perform pre-simulation processing and enable multi-threaded handling of user requests;

A3: perform clothing heat-and-moisture transfer calculation and human thermoregulation calculation;

A4: feed back the simulation results and back up the simulation records to the database.
A1 further includes:

A11: configure a server cluster comprising one scheduling server, one database server, and multiple internal servers; the IP address of the scheduling server is public, while the IP addresses of the database server and all internal servers are not;

A12: maintain communication between the scheduling server and each internal server through a TCP connection pool;

A13: the scheduling server receives all user requests but does not process them itself; long-lived connections are used between the scheduling server and clients, and user requests are forwarded to internal servers for processing according to the least-connections strategy.
A2 further includes:

A21: the internal server listens on a designated port; the demultiplexing thread watches for user requests forwarded by the scheduling server and hands them to a network I/O thread, which reads and parses the user data without blocking the listening port;

A22: the network I/O thread extracts the parameters needed for the simulation from the user request packet, including the user ID, body characteristic parameters, clothing settings, exercise plan parameters, and ambient environment parameters;

A23: after parsing the user input, the network I/O thread checks whether the simulation thread pool has an idle thread; if so, it selects one, passes it the simulation parameters obtained in A22, and proceeds to A3 to perform the clothing heat-and-moisture transfer calculation and human thermoregulation calculation; otherwise the request is appended to the simulation task queue.
A4 further includes:

A41: when a simulation thread finishes, it notifies the demultiplexing thread, which detects the event, fetches the result data, and invokes a network I/O thread to handle it; the result data comprise the temperature and sweat rate of each body part at each time step;

A42: the network I/O thread sends the result data back to the scheduling server over the TCP connection pool;

A43: upon receiving the result data, the scheduling server rewrites the packet's source address to its own IP address, so as not to expose the internal server's real IP address, and sends the packet to the user;

A44: the network I/O thread sends the simulation record, comprising the user ID, the user's input parameters, and the result data obtained in A41, to the database server;

A45: the data are stored as key-value pairs keyed by user ID; the database server checks whether an entry with the current user ID as key already exists, and if so appends the new simulation record to that entry's list; if not, it first creates a new entry keyed by the user ID and then appends the simulation record to its list field.
In a specific embodiment, the TCP connection pool of A12 is implemented as follows:
Several TCP connections are created in advance between the scheduling server and each internal server. The source port of each connection is a distinct local port on the scheduling server; the destination port is the internal server's designated port, i.e. the port on which the main simulation program listens. A minimum pool size min guarantees that at least min connections to each internal server exist at all times, and a maximum pool size max caps the number of connections to each internal server. An idle timeout idle is also configured: when a connection has been idle continuously for longer than idle and the current connection count exceeds min, the connection is closed. When the scheduling server needs to send data to an internal server, it first looks for an idle connection and, if one exists, sends the data on it. If no connection is idle, it checks whether the number of existing connections to that server has reached max; if not, it opens a new connection and sends the data; if max has been reached, the data is placed in a waiting queue until a connection becomes free. Connections are not closed immediately after sending data.
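The pool logic above can be sketched as follows. This is a minimal bookkeeping model, not the patented implementation: connections are stand-in objects rather than real sockets (a full version would wrap `socket.create_connection()`), and all class and parameter names are illustrative.

```python
import time
from collections import deque

class TcpConnectionPool:
    """Per-internal-server pool with min/max sizes, an idle timeout,
    and a waiting queue for data that arrives while the pool is full."""

    def __init__(self, min_conns=2, max_conns=8, idle_timeout=30.0):
        self.min_conns = min_conns
        self.max_conns = max_conns
        self.idle_timeout = idle_timeout
        self.idle = deque()          # (connection, time it became idle)
        self.busy = set()
        self.wait_queue = deque()    # payloads waiting for a free connection
        for _ in range(min_conns):   # pre-create the minimum number
            self.idle.append((object(), time.monotonic()))

    def _total(self):
        return len(self.idle) + len(self.busy)

    def acquire(self, payload):
        """Return a connection for `payload`, or queue it if the pool is full."""
        if self.idle:
            conn, _ = self.idle.popleft()
        elif self._total() < self.max_conns:
            conn = object()          # stand-in for opening a new TCP connection
        else:
            self.wait_queue.append(payload)
            return None
        self.busy.add(conn)
        return conn

    def release(self, conn):
        """Mark `conn` idle again; connections are not closed after sending."""
        self.busy.discard(conn)
        self.idle.append((conn, time.monotonic()))

    def reap_idle(self):
        """Close connections idle longer than the timeout, keeping >= min."""
        now = time.monotonic()
        while (self.idle and self._total() > self.min_conns
               and now - self.idle[0][1] > self.idle_timeout):
            self.idle.popleft()      # dropping the object stands in for close()
```

A dispatcher would call `acquire()` before each send, `release()` afterwards, and run `reap_idle()` periodically.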
Specifically, the long-lived connection of A13 is implemented as follows:
After receiving the user's data, the server does not disconnect from the client immediately; instead it starts a timer and keeps the connection open until the timer expires. If another request from the client arrives while the timer is running, the timer is restarted. If no further request arrives before the timer expires, the connection is closed.
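The keep-alive behaviour reduces to a deadline that each new request pushes forward. A minimal sketch, with illustrative names and an injectable clock so the logic can be exercised without real waiting:

```python
import time

class KeepAliveConnection:
    """The connection stays open for `timeout` seconds after the last
    request; each new request restarts the countdown."""

    def __init__(self, timeout=15.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock                  # injectable clock for testing
        self.deadline = self.clock() + timeout
        self.open = True

    def on_request(self):
        """Called whenever the client sends another request."""
        if self.open:
            self.deadline = self.clock() + self.timeout  # restart the timer

    def poll(self):
        """Close the connection once the timer has expired."""
        if self.open and self.clock() >= self.deadline:
            self.open = False
        return self.open
```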
The least-connections strategy of A13 is implemented as follows:
The scheduling server tracks the number of active requests it has forwarded to each internal server; initially every count is zero. A server's count is incremented when a request is forwarded to it or placed in the waiting queue of its TCP connection pool, and decremented when that server finishes processing a request or a connection is aborted by a timeout. On receiving a new request, the scheduling server prefers the internal server with the fewest active requests. If several servers tie, it compares the number of idle connections in its TCP connection pools to those servers and prefers the one with the most idle connections. If the tie persists, one of the tied servers is chosen at random.
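The three-stage choice (fewest active requests, then most idle pooled connections, then random) can be sketched directly. The dictionary keys `active` and `idle_conns` are hypothetical names for the two counters the text describes:

```python
import random

def pick_server(servers):
    """Choose an internal server per the least-connections strategy.

    `servers` maps a server name to {'active': int, 'idle_conns': int}.
    """
    # 1. Fewest active requests wins.
    least_active = min(s["active"] for s in servers.values())
    tied = [name for name, s in servers.items() if s["active"] == least_active]
    if len(tied) == 1:
        return tied[0]
    # 2. Tie-break: most idle connections in that server's TCP pool.
    most_idle = max(servers[name]["idle_conns"] for name in tied)
    tied = [name for name in tied if servers[name]["idle_conns"] == most_idle]
    # 3. Still tied: pick at random.
    return random.choice(tied)
```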
Specifically, the demultiplexing thread of A21 performs the following functions:
The demultiplexing thread's main job is to monitor events occurring in the system and dispatch them to the appropriate handler functions. It maintains an internal watch list: whenever a new connection is established in the TCP connection pool between the internal server and the scheduling server, that connection's socket is added to the list, and whenever a pooled connection closes, its socket is removed. The watch list also covers all simulation worker threads. The thread loops on operating-system facilities such as select or epoll, waiting for two kinds of events: arrival of user data and completion of a simulation. When an event occurs, the operating system notifies the demultiplexing thread, which looks up the corresponding socket or thread in the watch list, dispatches the event to the matching handler, and then resumes waiting for the next event.
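The socket side of this loop maps naturally onto Python's `selectors` module, which wraps epoll on Linux and select/kqueue elsewhere. The sketch below is illustrative only: a local socket pair stands in for a pooled connection from the scheduling server, and the handler stands in for handing data to a network IO thread.

```python
import selectors
import socket

def run_demux_once(sel):
    """One iteration of the demultiplexing loop: wait for ready events
    and dispatch each to the handler stored at registration time.
    A real loop would repeat this forever; one pass keeps it testable."""
    for key, _mask in sel.select(timeout=1.0):
        handler = key.data          # handler attached when registered
        handler(key.fileobj)

sel = selectors.DefaultSelector()   # epoll on Linux, select/kqueue otherwise
left, right = socket.socketpair()
received = []

def on_user_data(sock):
    # Stand-in for passing the data to a network IO thread (step A21).
    received.append(sock.recv(1024))

sel.register(right, selectors.EVENT_READ, on_user_data)  # join the watch list
left.sendall(b"user request")
run_demux_once(sel)
sel.unregister(right)               # mirrors removal when a connection closes
left.close(); right.close()
```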
Specifically, the network IO thread of A21 performs the following functions:
The network IO thread extracts the simulation parameters from the request packet and, once the simulation finishes, sends the result data back to the scheduling server. Because the network IO thread and the demultiplexing thread run concurrently, parsing user data never blocks the listening port.
Further, the simulation thread pool of A23 is implemented as follows:
A number of threads dedicated to simulation work are created in the internal server ahead of time. A rule of thumb for the initial pool size is T = C/P, where T is the pool size, C is the number of server CPUs, and P is the fraction of time spent on compute-intensive work. Initially all threads are idle, with no tasks to execute. A minimum thread count min and a maximum max ensure that the number of simulation threads (idle plus busy) never falls below min or rises above max. A maximum idle timeout idle is also configured: when a simulation thread has been idle continuously for longer than idle and the pool holds more than min threads, that thread is terminated. When a new simulation task arrives, one of three cases applies:
1. If the pool contains idle threads, one is chosen at random to run the task.
2. If no thread is idle but the pool holds fewer than max threads, a new thread is created to run the task and added to the pool.
3. If no thread is idle and the pool already holds max or more threads, the task is appended to the end of the task queue. The queue is served first-in, first-out: when a pool thread finishes its simulation and becomes idle, the task at the head of the queue is removed and handed to an idle thread.
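The three dispatch cases can be sketched as pure bookkeeping. This is a simplified model under stated assumptions: threads are counters rather than real `threading.Thread` objects, and the names `min_threads`/`max_threads` mirror min/max in the text.

```python
from collections import deque

class SimulationThreadPool:
    """Bookkeeping model of the three dispatch cases above."""

    def __init__(self, min_threads=2, max_threads=4):
        self.min_threads = min_threads
        self.max_threads = max_threads
        self.idle = min_threads       # pre-created threads, all idle
        self.busy = 0
        self.queue = deque()          # FIFO task queue

    def submit(self, task):
        if self.idle > 0:                               # case 1: reuse idle
            self.idle -= 1
            self.busy += 1
        elif self.idle + self.busy < self.max_threads:  # case 2: grow pool
            self.busy += 1
        else:                                           # case 3: enqueue FIFO
            self.queue.append(task)

    def task_finished(self):
        """A worker completed: pull the next queued task or go idle."""
        if self.queue:
            self.queue.popleft()      # head-of-queue task runs next
        else:
            self.busy -= 1
            self.idle += 1
```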
In a specific embodiment, when an internal server finishes a simulation task, the thread is not destroyed but marked idle again. Reusing threads in this way avoids the overhead of repeatedly creating and tearing down threads, and thus shortens the time needed to serve each user request.
Specifically, the key-value storage of A45 is implemented as follows:
Each data entry holds two fields: the first, the "key", records the user's ID; the second is a list, initially empty. When the database server receives a simulation record from an internal server, it checks whether an entry keyed by the current user ID already exists. If it does, the record is inserted into that entry's list; if not, a new entry is created with the user ID as the key and the record is then inserted into its list field. To keep the stored data from exhausting the database server's space, at most five simulation records are kept per user ID: whenever a new record is inserted into a list that already holds five elements, the oldest record is deleted first.
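An in-memory sketch of this capped store, assuming a Python dictionary in place of the actual database: `defaultdict` handles the create-on-first-use branch, and `deque(maxlen=5)` automatically drops the oldest record on overflow.

```python
from collections import defaultdict, deque

MAX_RECORDS = 5  # at most five simulation records per user ID

# user ID -> list-like field holding the newest records
store = defaultdict(lambda: deque(maxlen=MAX_RECORDS))

def save_record(user_id, record):
    """Create the entry on first use, then append; at capacity the
    earliest-inserted record falls off automatically."""
    store[user_id].append(record)
```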
In this embodiment of the invention, multiple servers are configured to handle user requests, which are dispatched so that load is balanced across the servers and large numbers of requests never pile up on a single machine. On this basis, a scheduling algorithm balances the computational pressure on each server, lowering the average response time and keeping every server operating at high efficiency. Within each server, a multi-threaded model runs several simulation tasks in parallel, shortening the time users spend blocked. In addition, to cope with unstable network conditions and the possibility that results are lost in transit, the simulation results are also backed up to a server. Together these measures effectively improve the motion simulation platform's processing capacity and efficiency under highly concurrent requests.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be carried out by a program instructing the relevant hardware; the program may be stored on a computer-readable storage medium, such as read-only memory (ROM), random-access memory (RAM), a magnetic disk, or an optical disc.
The foregoing has described in detail the solution for high-concurrency requests on a motion simulation platform provided by the embodiments of the present invention. Specific examples were used to explain its principles and implementation; they are offered only to aid understanding of the method and its core idea. Those of ordinary skill in the art may vary the specific implementation and scope of application in accordance with the idea of the invention, and accordingly this description should not be construed as limiting the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711260988.5A CN108200118A (en) | 2017-12-04 | 2017-12-04 | A kind of solution based on the request of movement simulation platform high concurrent |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108200118A true CN108200118A (en) | 2018-06-22 |
Family
ID=62573525
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711260988.5A Pending CN108200118A (en) | 2017-12-04 | 2017-12-04 | A kind of solution based on the request of movement simulation platform high concurrent |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108200118A (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110188490A (en) * | 2019-06-03 | 2019-08-30 | 珠海格力电器股份有限公司 | Method and device for improving data simulation efficiency, storage medium and electronic device |
| CN110765663A (en) * | 2019-11-25 | 2020-02-07 | 中冶赛迪重庆信息技术有限公司 | Concurrent processing method and system based on parametric simulation |
| CN112148493A (en) * | 2020-09-30 | 2020-12-29 | 武汉中科通达高新技术股份有限公司 | Streaming media task management method and device and data server |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102523249A (en) * | 2011-11-24 | 2012-06-27 | 哈尔滨工业大学 | Distributed long-distance simulation system and simulation method based on Web |
| CN102739799A (en) * | 2012-07-04 | 2012-10-17 | 合一网络技术(北京)有限公司 | Distributed communication method in distributed application |
| US20160184638A1 (en) * | 2014-12-25 | 2016-06-30 | Compal Electronics, Inc. | Fitness transmission device and information processing method |
| CN105743989A (en) * | 2016-03-31 | 2016-07-06 | 宇龙计算机通信科技(深圳)有限公司 | Motion information push method and push apparatus, and terminal |
| US20170279654A1 (en) * | 2015-09-03 | 2017-09-28 | Hitachi, Ltd. | Data Processing System and Data Processing Method |
| CN107251031A (en) * | 2015-01-13 | 2017-10-13 | 戴尔斯生活有限责任公司 | System, method and product for monitoring and strengthening health |
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102523249A (en) * | 2011-11-24 | 2012-06-27 | 哈尔滨工业大学 | Distributed long-distance simulation system and simulation method based on Web |
| CN102739799A (en) * | 2012-07-04 | 2012-10-17 | 合一网络技术(北京)有限公司 | Distributed communication method in distributed application |
| US20160184638A1 (en) * | 2014-12-25 | 2016-06-30 | Compal Electronics, Inc. | Fitness transmission device and information processing method |
| CN105743963A (en) * | 2014-12-25 | 2016-07-06 | 仁宝电脑工业股份有限公司 | Fitness transmission device and its information processing method |
| CN107251031A (en) * | 2015-01-13 | 2017-10-13 | 戴尔斯生活有限责任公司 | System, method and product for monitoring and strengthening health |
| US20170279654A1 (en) * | 2015-09-03 | 2017-09-28 | Hitachi, Ltd. | Data Processing System and Data Processing Method |
| CN105743989A (en) * | 2016-03-31 | 2016-07-06 | 宇龙计算机通信科技(深圳)有限公司 | Motion information push method and push apparatus, and terminal |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| RJ01 | Rejection of invention patent application after publication | | |
Application publication date: 20180622 |