CN103677760B - An OpenFlow-based event parallel controller and event parallel processing method thereof - Google Patents

Publication number: CN103677760B
Application number: CN201310647876.0A
Authority: CN (China)
Prior art keywords: state, flow, task, thread, message
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN103677760A (Chinese)
Inventors: 刘轶, 宋平, 刘驰
Current assignee: Kaixi Beijing Information Technology Co., Ltd. (the listed assignees may be inaccurate)
Original assignee: Beihang University
Application filed by Beihang University
Priority: CN201310647876.0A; published as CN103677760A; granted as CN103677760B

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an OpenFlow-based event parallel controller and an event parallel processing method thereof. The method separates the sending and receiving of OpenFlow messages from the processing of OpenFlow events, and uses additional computing threads to accelerate event processing. After the application starts, the controller establishes links with the switches and distributes the links evenly among multiple I/O threads; the sending and receiving of messages on each link is handled by a single I/O thread. When an OpenFlow message is received, the application triggers the corresponding OpenFlow event and, according to the event type, generates processing tasks for flow objects and state objects, which are handed to different threads for processing. During flow-event processing, subtasks can be generated dynamically and executed in parallel by multiple threads. Shared state is processed by a single dedicated state thread. Compared with existing parallel processing methods for OpenFlow events, the method of the present invention has better performance scalability and a simpler data-access pattern.

Description

An OpenFlow-based event parallel controller and its event parallel processing method

Technical Field

The invention relates to an OpenFlow controller in the field of software-defined networking, and to a method for the parallel processing of events inside the OpenFlow controller, in particular to parallel processing inside the handling of OpenFlow flow events.

Background Art

OpenFlow was first proposed in 2008. Its idea is to separate the data forwarding and routing control functions of traditional network devices, using a centralized controller to manage and configure network devices of all kinds through standardized interfaces. OpenFlow has attracted wide attention in industry and has become a very popular technology in recent years. Because OpenFlow brings flexible programmability to the network, it has been widely applied in campus networks, wide-area networks, mobile networks, data-center networks, and other networks.

The OpenFlow Switch Specification, published by the Open Networking Foundation on December 31, 2009, introduces the types of OpenFlow messages in Section 4.1. OpenFlow messages include controller-to-switch messages, asynchronous messages, and symmetric messages. The asynchronous messages include Packet-in (flow arrival), Flow-Removed (flow removal), Port-status (port status), and Error messages.
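The three message categories and four asynchronous message types above can be sketched as a small classifier; the controller-to-switch and symmetric examples (Flow-Mod, Hello, and so on) come from the OpenFlow specification, while the function itself is purely illustrative and not part of the patent:

```python
# Hypothetical sketch of the OpenFlow message taxonomy described above;
# the function and set names are illustrative, not from the patent.
ASYNC_MESSAGES = {"Packet-in", "Flow-Removed", "Port-status", "Error"}

def classify(msg_type: str) -> str:
    """Map a message type to one of the three categories in Section 4.1."""
    if msg_type in ASYNC_MESSAGES:
        return "asynchronous"
    if msg_type in {"Features-Request", "Flow-Mod", "Packet-Out"}:
        return "controller-to-switch"
    if msg_type in {"Hello", "Echo-Request", "Echo-Reply"}:
        return "symmetric"
    raise ValueError(f"unknown message type: {msg_type}")
```

The four asynchronous types are exactly those the controller of the present invention associates with events later in the text.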

"SDN technology based on OpenFlow" by Zuo Qingyun et al., published in the Journal of Software on March 29, 2013, discloses that an OpenFlow network consists mainly of two parts: OpenFlow switches and a controller. An OpenFlow switch forwards packets according to flow tables and represents the data forwarding plane; the controller implements management and control through a whole-network view, and its control logic represents the control plane. The processing unit of each OpenFlow switch consists of flow tables, each flow table consists of many flow entries, and a flow entry represents a forwarding rule. A packet entering the switch obtains its corresponding action by looking up the flow table. The controller maintains the basic information of the entire network, such as topology, network elements, and provided services, by maintaining a network view. Applications running on the controller manage and control the entire network by accessing the global data in the network view and then operating the OpenFlow switches.
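Flow-table lookup as described above can be sketched as follows; the field names, priority handling, and table-miss behavior are simplified assumptions rather than the full specification:

```python
# Simplified sketch of flow-table lookup: a packet gets the action of the
# highest-priority matching entry; an empty match dict acts as a wildcard.
def lookup(flow_table, packet):
    """Return the action of the highest-priority entry matching the packet."""
    best = None
    for entry in flow_table:
        # An entry matches if every field it specifies equals the packet's.
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            if best is None or entry["priority"] > best["priority"]:
                best = entry
    return best["action"] if best else "send-to-controller"

table = [
    {"match": {"dst": "10.0.0.2"}, "priority": 10, "action": "output:2"},
    {"match": {}, "priority": 0, "action": "drop"},  # table-miss entry
]
```

A packet with no matching entry at all would be sent to the controller, which is what later produces the Packet-in events this patent is concerned with.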

These characteristics of OpenFlow make the processing efficiency of the OpenFlow controller the key to whether the network can operate normally. The processing efficiency of the original single-threaded controllers falls far short of the demands of large-scale OpenFlow networks. The prior art therefore uses multithreading to process OpenFlow events in parallel inside the controller, improving the controller's processing efficiency.

However, when existing parallel processing methods for OpenFlow events are used in a many-core environment to control a large OpenFlow network with complex behavior, processing performance suffers from scalability problems: (1) the flow-event handling process does not support parallel operations and cannot accommodate computations of high time complexity; (2) adding threads does little to improve the processing efficiency of flow events; (3) during OpenFlow event processing, accesses to shared data are coupled, which limits performance scalability.

Aiming at the above problems, the present invention proposes, inside the OpenFlow controller, a new event parallel controller and event parallel processing method for OpenFlow events, especially OpenFlow flow events.

Summary of the Invention

A first object of the present invention is to provide an OpenFlow-based event parallel controller. In a many-core environment and under a large-scale OpenFlow network scenario, the controller uses I/O threads to send and receive OpenFlow messages in parallel and uses computing threads to accelerate the processing of OpenFlow events; it adds parallel support inside the flow-event handling process, enhances the computing capability of the OpenFlow controller, and improves performance scalability.

A second object of the present invention is to propose an OpenFlow-based event parallel processing method. The method uses multiple threads to send and receive OpenFlow messages in parallel; when an OpenFlow message is received, the corresponding OpenFlow event is triggered. For a flow event and its processing method, a processing task for that flow event is generated and executed in parallel by the flow threads; for other event types and their processing methods, processing tasks for shared state are generated and executed by the state threads. Inside the handling of a flow event, subtasks can be generated dynamically, and through task stealing multiple threads can process the same flow event in parallel.

The present invention is an OpenFlow-based event parallel controller comprising a flow processing module (1), a state processing module (2), and an OpenFlow message distribution control module (3).

In its first aspect, the OpenFlow message distribution control module (3) uses an asynchronous non-blocking I/O model to receive, from the receive buffer of a link, the OpenFlow messages sent by an OpenFlow switch (4); the OpenFlow messages include Packet-in, Flow-Removed, Port-status, and Error messages.

In its second aspect, the OpenFlow message distribution control module (3) sends the flow processing tasks TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} to the local task queue Qz of the main thread of the flow processing module (1).

The flow processing tasks TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} are obtained as follows. (A) A Packet-in event is first triggered by a Packet-in message; a flow object FLOW_Base_flow = {F1, F2, …, Ff} with the Base_flow structure is then generated from the Packet-in event; finally, the flow processing task FA_Packet-in(FLOW_Base_flow) corresponding to the Packet-in event is generated by the start method of the Base_flow structure. (B) A Flow-Removed event is first triggered by a Flow-Removed message; a flow object FLOW_Base_flow = {F1, F2, …, Ff} with the Base_flow structure of Table 1 is then generated from the Flow-Removed event; finally, the flow processing task FA_Flow-Removed(FLOW_Base_flow) corresponding to the Flow-Removed event is generated by the start method of the Base_flow structure.

In its third aspect, the OpenFlow message distribution control module (3) sends the state processing tasks TASK3-2 = {SA_Port-status(STATE_Base_state), SA_Error(STATE_Base_state)} to the access task queues P_state(STATE_Base_state) = {P1, P2, …, Ps} of the state processing module (2).

The state processing tasks TASK3-2 = {SA_Port-status(STATE_Base_state), SA_Error(STATE_Base_state)} are obtained as follows. (A) A Port-status event is first triggered by a Port-status message; a processing task for the state object STATE_Base_state = {S1, S2, …, Ss} with the Base_state structure is then generated from the Port-status event; this Port-status state processing task is denoted SA_Port-status(STATE_Base_state). (B) An Error event is first triggered by an Error message; a processing task for the state object STATE_Base_state = {S1, S2, …, Ss} with the Base_state structure of Table 2 is then generated from the Error event; this Error state processing task is denoted SA_Error(STATE_Base_state).

In its fourth aspect, the OpenFlow message distribution control module (3) receives the controller-to-switch messages output by the flow processing module (1).

In its fifth aspect, the OpenFlow message distribution control module (3) uses the asynchronous non-blocking I/O model to output controller-to-switch messages to the OpenFlow switches (4) from the send buffers of the links owned by the message threads TH3 = {C1, C2, …, Cc}.

In its first aspect, the flow processing module (1) receives the flow processing tasks TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} output by the OpenFlow message distribution control module (3).

In its second aspect, the flow processing module (1) saves TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} to the local task queue Qz of the main thread.

In its third aspect, the flow processing module (1) distributes TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} to the local task queues Q_TH1 = {Q1, Q2, …, Qa} of the computing threads in a round-robin manner.

In its fourth aspect, the flow processing module (1) executes the specific tasks in TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)}; it dynamically generates processing tasks for the flow object FLOW_Base_flow = {F1, F2, …, Ff}, denoted flow-object subtasks, and adds them to Q_TH1 = {Q1, Q2, …, Qa}.

In its fifth aspect, the flow processing module (1) executes the specific tasks in TASK3-1 = {FA_Packet-in(FLOW_Base_flow), FA_Flow-Removed(FLOW_Base_flow)} and dynamically generates processing tasks for the state object STATE_Base_state = {S1, S2, …, Ss}, denoted state-object subtasks. The value of the subtask's global attribute is checked: if global is true, the state is a globally shared state, and the subtask is handed to the state processing module (2), whose task completion message STA2-1 is then awaited; otherwise, if global is not true, the state is a locally shared state, and the thread of the flow processing module (1) that produced the subtask executes it directly.
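The global/local decision above can be sketched as follows; the queue, event object, and function names are hypothetical illustrations, not the patent's implementation:

```python
import queue
import threading

# Hypothetical sketch: a flow thread hands a state-object subtask to the
# state module when its global attribute is true, and runs it inline when
# the state is only locally shared. All names here are illustrative.
state_module_queue = queue.Queue()

def dispatch_state_subtask(subtask):
    if subtask["global"]:
        done = threading.Event()          # stands in for the STA2-1 message
        state_module_queue.put((subtask, done))
        done.wait()                       # flow thread waits for completion
        return "handled-by-state-thread"
    subtask["run"]()                      # locally shared: execute directly
    return "handled-locally"

def state_thread_loop():
    # A single state thread serializes all accesses to global shared state.
    while True:
        subtask, done = state_module_queue.get()
        subtask["run"]()
        done.set()                        # signal task completion

threading.Thread(target=state_thread_loop, daemon=True).start()
```

Because the flow thread blocks until the completion signal arrives, the global state is never touched by two threads at once, which is the non-mutually-exclusive access scheme claimed later in the text.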

In its sixth aspect, the flow processing module (1) balances the task load of the computing threads by task stealing.
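The task stealing mentioned above can be sketched with standard work-stealing deques; the LIFO-pop/FIFO-steal convention is an assumption borrowed from common work-stealing designs, not a detail given by the patent:

```python
from collections import deque

# Illustrative work-stealing sketch: each computing thread owns a deque,
# pops its own subtasks from the tail, and steals from the head of a
# busier thread's deque when its own deque runs empty.
class Worker:
    def __init__(self):
        self.tasks = deque()

    def next_task(self, others):
        if self.tasks:
            return self.tasks.pop()            # take from own tail (LIFO)
        for victim in others:
            if victim.tasks:
                return victim.tasks.popleft()  # steal from victim's head
        return None                            # nothing left anywhere

a, b = Worker(), Worker()
b.tasks.extend(["t1", "t2", "t3"])
```

Stealing from the opposite end of the victim's deque keeps the victim's recently pushed, cache-warm subtasks local to it, which is why this convention is common in work-stealing schedulers.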

In its seventh aspect, the flow processing module (1) outputs controller-to-switch messages to the OpenFlow message distribution control module (3). A computing thread synchronously writes the controller-to-switch messages to be output into the send buffer of the link owned by the message threads TH3 = {C1, C2, …, Cc}.

In its first aspect, the state processing module (2) receives the state processing tasks TASK3-2 = {SA_Port-status(STATE_Base_state), SA_Error(STATE_Base_state)} sent by the OpenFlow message distribution control module (3) and saves them to the access task queues of the state objects STATE_Base_state = {S1, S2, …, Ss}.

In its second aspect, the state processing module (2) receives the state processing tasks sent by the flow processing module (1) and saves them to the access task queues of the state objects STATE_Base_state = {S1, S2, …, Ss}.

In its third aspect, for thread B1 of the state threads TH2 = {B1, B2, …, Bb}, the access task queues belonging to B1 are extracted as P_state(B1) = {P1, P2, …, Ps}; B1 then executes the tasks in P_state(B1) by polling; when execution is complete, B1 sends the task completion message STA2-1(B1) to the flow processing module (1).

Thread B2 of the state threads TH2 = {B1, B2, …, Bb} extracts the access task queues P_state(B2) belonging to B2; B2 then executes the tasks in P_state(B2) by polling; when execution is complete, B2 sends the task completion message STA2-1(B2) to the flow processing module (1).

Thread Bb of the state threads TH2 = {B1, B2, …, Bb} extracts the access task queues P_state(Bb) belonging to Bb; Bb then executes the tasks in P_state(Bb) by polling; when execution is complete, Bb sends the task completion message STA2-1(Bb) to the flow processing module (1).

In its fourth aspect, the set of task completion messages sent by the state processing module (2) to the flow processing module (1) is denoted STA2-1 = {STA2-1(B1), STA2-1(B2), …, STA2-1(Bb)}.
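The per-thread polling of access task queues described above can be sketched as follows; the round-robin order and the way queues are partitioned are illustrative assumptions:

```python
from collections import deque

# Illustrative sketch: each state thread Bi owns a disjoint subset of the
# access task queues and drains them by round-robin polling, so each state
# object is only ever touched by one thread and needs no locking.
def run_state_thread(owned_queues):
    """Poll each owned access task queue in turn until all are empty."""
    completed = []
    while any(owned_queues):
        for q in owned_queues:            # round-robin over P_state(Bi)
            if q:
                task = q.popleft()
                completed.append(task())  # execute the state access task
    return completed                      # stands in for the STA2-1 message

p1 = deque([lambda: "S1-update", lambda: "S1-read"])
p2 = deque([lambda: "S2-update"])
```

Because queue-to-thread ownership is exclusive, tasks touching the same state object are serialized in arrival order even though different state objects progress in parallel on different threads.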

The advantages of the event parallel processing method of the present invention for the OpenFlow controller are:

① Inside the OpenFlow controller, the present invention separates event processing from message sending and receiving; by using computing threads to process OpenFlow events in parallel and adding parallel support inside the flow-event handling process, it effectively enhances the computing capability of the OpenFlow controller and improves the scalability of its processing performance.

② The present invention uses state threads to give each piece of shared state a unique handler, so shared data can be accessed in a non-mutually-exclusive way; this simplifies access to shared data during event processing and improves access efficiency to a certain extent.

Brief Description of the Drawings

Fig. 1 is a structural block diagram of the OpenFlow-based event parallel controller of the present invention.

Fig. 2 is a schematic diagram of the parallel processing inside the OpenFlow controller of the present invention.

Fig. 3 compares speedup ratios based on the switch program.

Fig. 4 compares speedup ratios based on the QPAS algorithm.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings.

As shown in Fig. 1, an OpenFlow-based event parallel controller of the present invention comprises a flow processing module 1, a state processing module 2, and an OpenFlow message distribution control module 3. For convenience, the controller of the present invention is referred to below as POCA. The POCA of the present invention is used together with an existing OpenFlow controller and is embedded in the OpenFlow network architecture.

In the present invention, POCA performs the associated processing of the Packet-in (flow arrival), Flow-Removed (flow removal), Port-status (port status), and Error messages among the OpenFlow messages.

In the present invention, the event corresponding to a Packet-in message is called the Packet-in event; the event corresponding to a Flow-Removed message is called the Flow-Removed event; the event corresponding to a Port-status message is called the Port-status event; and the event corresponding to an Error message is called the Error event.

In the present invention, in the flow object FLOW_Base_flow = {F1, F2, …, Ff} of the Base_flow structure, F1 denotes the first type of flow object, F2 the second type, and Ff the last type; f is the identification number of a flow object. For convenience, Ff is also referred to below as any flow object.

In the present invention, in the state object STATE_Base_state = {S1, S2, …, Ss} of the Base_state structure, S1 denotes the first type of state object, S2 the second type, and Ss the last type; s is the identification number of a state object. For convenience, Ss is also referred to below as any state object. Each state object corresponds to a unique access task queue: the access task queue of the first state object S1 is denoted P1 (the first access task queue P1), that of the second state object S2 is denoted P2 (the second access task queue P2), and that of the last state object Ss is denoted Ps (the last access task queue Ps). In set form, the access task queues are written P_state(STATE_Base_state) = {P1, P2, …, Ps}.

In the present invention, the threads of the flow processing module 1 are recorded as the flow threads TH1 = {A1, A2, …, Az, …, Aa}, where A1 denotes the first thread of the flow processing module 1, A2 the second thread, Az the z-th thread, and Aa the last thread; a is the thread identification number in the flow processing module 1. For convenience, Aa is also referred to below as any thread. When one thread Az among the flow threads TH1 = {A1, A2, …, Az, …, Aa} serves as the main thread, the remaining threads are the computing threads. Each thread corresponds to a unique local task queue: the local task queue of the first thread A1 is denoted Q1 (the first local task queue Q1), that of the second thread A2 is denoted Q2 (the second local task queue Q2), that of the z-th thread Az is denoted Qz (the z-th local task queue Qz, also called the main thread's local task queue Qz), and that of the last thread Aa is denoted Qa (the last local task queue Qa). The set of local task queues corresponding to the computing threads is denoted Q_TH1 = {Q1, Q2, …, Qa}.

In the present invention, the threads of the state processing module 2 are recorded as the state threads TH2 = {B1, B2, …, Bb}, where B1 denotes the first thread of the state processing module 2, B2 the second thread, and Bb the last thread; b is the thread identification number in the state processing module 2. For convenience, Bb is also referred to below as any thread. The state objects STATE_Base_state = {S1, S2, …, Ss} of the state processing module 2 are distributed evenly over the state threads TH2 = {B1, B2, …, Bb}, so any state thread Bb handles several access task queues; the access task queues handled by state thread Bb are denoted P_state(Bb) = {P1, P2, …, Ps}, with P_state(Bb) ⊆ P_state(STATE_Base_state). Each access task queue Ps corresponds to exactly one thread Bb.

In the present invention, the threads of the OpenFlow message distribution control module 3 are recorded as the message threads TH3 = {C1, C2, …, Cc}, where C1 denotes the first thread of the OpenFlow message distribution control module 3, C2 the second thread, and Cc the last thread; c is the thread identification number in the OpenFlow message distribution control module 3. For convenience, Cc is also referred to below as any thread.

The links between the OpenFlow message distribution control module 3 and the multiple OpenFlow switches 4 are recorded as CON(SV, SW), where SV denotes the OpenFlow controller and SW the set of OpenFlow switches. In SW = {D1, D2, …, Dd}, D1 denotes the first OpenFlow switch, D2 the second, and Dd the last; d is the identification number of an OpenFlow switch. For convenience, Dd is also referred to below as any OpenFlow switch. The first link is denoted CON(SV, D1), the second CON(SV, D2), and the last CON(SV, Dd) (also called any link), with CON(SV, SW) = {CON(SV, D1), CON(SV, D2), …, CON(SV, Dd)}. The links between the OpenFlow message distribution control module 3 and the OpenFlow switches 4 are distributed evenly over the message threads TH3 = {C1, C2, …, Cc}, and each link CON(SV, Dd) corresponds to exactly one thread Cc.

(1) OpenFlow message distribution control module 3

As shown in Fig. 1, in its first aspect, the OpenFlow message distribution control module 3 uses an asynchronous non-blocking I/O model to receive, from the receive buffer of a link, the OpenFlow messages sent by an OpenFlow switch 4.

The OpenFlow messages include Packet-in, Flow-Removed, Port-status, and Error messages.
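The asynchronous non-blocking I/O model mentioned above can be sketched with Python's standard selectors module; the socket setup, buffer size, and handler names are illustrative assumptions, not the patent's implementation or the OpenFlow wire format:

```python
import selectors
import socket

# Illustrative sketch: one I/O thread multiplexes the receive buffers of all
# the links it owns, so a single thread serves many switch connections.
sel = selectors.DefaultSelector()
received = []  # collects raw bytes; stands in for triggering OpenFlow events

def handle_message(data: bytes) -> None:
    received.append(data)  # stub: a real controller would parse and dispatch

def on_readable(conn: socket.socket) -> None:
    data = conn.recv(4096)       # non-blocking drain of the link's buffer
    if data:
        handle_message(data)
    else:                        # empty read: the switch closed the link
        sel.unregister(conn)
        conn.close()

def register_link(conn: socket.socket) -> None:
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, data=on_readable)

def poll_once(timeout: float = 1.0) -> None:
    # One iteration of the I/O thread's event loop.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)    # invoke the registered on_readable callback
```

Each link is registered with exactly one such selector, matching the text's rule that message sending and receiving on a link is handled by a unique I/O thread.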

In its second aspect, the Openflow message distribution control module 3 sends the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} to the local task queue Qz of the main thread of the stream processing module 1;

In the present invention, the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} are obtained as follows: (A) a Packet-in event is first triggered by a Packet-in message; a flow object FLOW_Base_flow = {F1, F2, ..., Ff} with the Base_flow structure of Table 1 is then generated from the Packet-in event; finally, the flow processing task FA_Packet-in^FLOW_Base_flow corresponding to the Packet-in event is generated by the start method of the Base_flow structure. (B) A Flow-Removed event is first triggered by a Flow-Removed message; a flow object FLOW_Base_flow = {F1, F2, ..., Ff} with the Base_flow structure of Table 1 is then generated from the Flow-Removed event; finally, the flow processing task FA_Flow-Removed^FLOW_Base_flow corresponding to the Flow-Removed event is generated by the start method of the Base_flow structure.

In the present invention, in the flow objects FLOW_Base_flow = {F1, F2, ..., Ff} of the Base_flow structure, F1 denotes the first type of flow object, F2 the second, and Ff the last; f is the flow object identification number. For convenience of description, Ff is hereafter also used to denote an arbitrary type of flow object.

Table 1: Base_flow class
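The message-to-task path described above (message triggers event, event yields a Base_flow object, the object's start method yields the flow processing task placed on Qz) can be sketched as follows. This is an illustrative Python sketch under assumed names (BaseFlow, payload), not the patent's implementation:

```python
from collections import deque

class BaseFlow:
    """Hypothetical stand-in for the Base_flow structure of Table 1."""
    def __init__(self, event_type, payload):
        self.event_type = event_type          # "Packet-in" or "Flow-Removed"
        self.payload = payload
    def start(self):
        # The start method generates the flow processing task for this event.
        return ("FA_" + self.event_type, self)

Qz = deque()                                  # main thread's local task queue
for msg_type in ("Packet-in", "Flow-Removed"):
    flow = BaseFlow(msg_type, payload={})     # event -> flow object
    Qz.append(flow.start())                   # start method -> task on Qz
```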

In its third aspect, the Openflow message distribution control module 3 sends the state processing tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} to the access task queues P_state^STATE_Base_state = {P1, P2, ..., Ps} of the state processing module 2;

In the present invention, the state processing tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} are obtained as follows: (A) a Port-status event is first triggered by a Port-status message; a processing task for the state objects STATE_Base_state = {S1, S2, ..., Ss} with the Base_state structure of Table 2 is then generated from the Port-status event, i.e. the Port-status state processing task, denoted SA_Port-status^STATE_Base_state. (B) An Error event is first triggered by an Error message; a processing task for the state objects STATE_Base_state = {S1, S2, ..., Ss} with the Base_state structure of Table 2 is then generated from the Error event, i.e. the Error state processing task, denoted SA_Error^STATE_Base_state.

Table 2: Base_state class
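The third aspect above, in which a Port-status or Error message becomes a state processing task appended to the state object's access task queue P_state, can be sketched as follows. The Python names (BaseState, access_queue, on_message) are hypothetical, and global_shared stands in for the global attribute of the Base_state structure:

```python
from collections import deque

class BaseState:
    """Hypothetical stand-in for the Base_state structure of Table 2."""
    def __init__(self, global_shared=True):
        self.global_shared = global_shared    # the 'global' attribute
        self.access_queue = deque()           # P_state = {P1, P2, ..., Ps}

def on_message(state, msg_type):
    """Port-status / Error messages trigger events that become SA tasks."""
    if msg_type in ("Port-status", "Error"):
        state.access_queue.append("SA_" + msg_type)

state = BaseState()
on_message(state, "Port-status")
on_message(state, "Error")
```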

Openflow消息分配控制模块3第四方面接收流处理模块1输出的controller-to-switch消息; The fourth aspect of the Openflow message distribution control module 3 receives the controller-to-switch message output by the stream processing module 1;

In its fifth aspect, the Openflow message distribution control module 3 uses the asynchronous non-blocking I/O model to output controller-to-switch messages to the Openflow switches 4 from the send buffers of the links owned by the message-threads TH3 = {C1, C2, ..., Cc}.

(二)流处理模块1 (2) Stream processing module 1

Referring to Fig. 1, in its first aspect the stream processing module 1 receives the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} output by the Openflow message distribution control module 3;

In its second aspect, it saves TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} into the local task queue Qz of the main thread;

In its third aspect, it sends TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} by polling (round-robin) to the local task queues Q_TH1 = {Q1, Q2, ..., Qa} of the computing threads of the stream processing module 1;

In its fourth aspect, it executes the specific tasks in TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow}; it dynamically generates processing tasks for the flow objects FLOW_Base_flow = {F1, F2, ..., Ff}, denoted flow object subtasks TASK_FLOW_Base_flow^sub, and adds them to Q_TH1 = {Q1, Q2, ..., Qa};

In its fifth aspect, it executes the specific tasks in TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow}; it dynamically generates processing tasks for the state objects STATE_Base_state = {S1, S2, ..., Ss}, denoted state object subtasks TASK_STATE_Base_state^sub, and decides on the value of the global attribute of the subtask's state (shown in row 4 of Table 2): if global is true, the state is a globally shared state, so the subtask TASK_STATE_Base_state^sub is handed to the state processing module 2 and the thread waits for the task completion message STA2-1 from the state processing module 2; conversely, if global is not true, the state is a locally shared state, and the thread of the stream processing module 1 that generated TASK_STATE_Base_state^sub executes it directly;
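The global/local decision in this fifth aspect can be sketched as follows. This is a hedged Python illustration: dispatch_state_subtask, run_locally and the queue name are invented for the example, and the wait for STA2-1 is reduced to a returned marker:

```python
from collections import deque

state_module_queue = deque()   # stands in for the state processing module's queues

def dispatch_state_subtask(task, state_is_global, run_locally):
    """Route a state object subtask based on the state's global attribute."""
    if state_is_global:
        # Globally shared state: hand to the state module, then wait for STA_2-1.
        state_module_queue.append(task)
        return "handed-to-state-module"
    # Locally shared state: the generating thread executes the subtask directly.
    return run_locally(task)

done = []
result_local = dispatch_state_subtask(
    "TASK_sub", False, lambda t: done.append(t) or "ran-locally")
result_global = dispatch_state_subtask(
    "TASK_sub", True, lambda t: "ran-locally")
```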

In its sixth aspect, the computing threads in the stream processing module 1 perform load balancing by task stealing.

所述“任务窃取方式”的相关公开信息: Relevant public information about the "task stealing method":

Article title: Scheduling multithreaded computations by work stealing

Authors: Robert D. Blumofe, Univ. of Texas at Austin, Austin;

Charles E. Leiserson, MIT Lab for Computer Science, Cambridge, MA;

Publication information: Journal of the ACM (JACM)

Volume 46, Issue 5, Sept. 1999, Pages 720-748, ACM, New York, NY, USA.
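In the spirit of the cited work-stealing scheduler, each computing thread owns a double-ended queue: the owner pushes and pops at one end, while an idle thread steals from the opposite end of a victim's deque. A minimal single-threaded Python sketch (the Worker class and its method names are hypothetical):

```python
from collections import deque

class Worker:
    """One computing thread's local task deque."""
    def __init__(self):
        self.tasks = deque()
    def push(self, task):
        self.tasks.append(task)        # owner works at the tail (LIFO)
    def pop(self):
        return self.tasks.pop() if self.tasks else None
    def steal_from(self, victim):
        # A thief takes from the head (FIFO end) of the victim's deque,
        # minimizing contention with the victim's own tail operations.
        return victim.tasks.popleft() if victim.tasks else None

a, b = Worker(), Worker()
for t in ("t1", "t2", "t3"):
    a.push(t)
stolen = b.steal_from(a)   # idle worker b steals the oldest task
own = a.pop()              # owner a keeps working newest-first
```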

In the present invention, when the stream processing module 1 receives a task completion message output by the state processing module 2, it first checks whether any computing thread is waiting for that message; if so, the computing thread waiting for the message in STA2-1 = {STA2-1^B1, STA2-1^B2, ..., STA2-1^Bb} resumes execution; otherwise, the message is ignored.

In its seventh aspect, the stream processing module 1 outputs controller-to-switch messages to the Openflow message distribution module 3. In the present invention, the computing threads synchronously write the controller-to-switch messages to be output into the send buffers of the links owned by the message-threads TH3 = {C1, C2, ..., Cc}.
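The synchronized write in this seventh aspect is needed because several computing threads may target the send buffer of the same link. A lock-protected buffer sketches the idea; the Python names are illustrative, and a real controller would write into a socket send buffer rather than a list:

```python
import threading

class LinkSendBuffer:
    """Send buffer of one link, shared by multiple computing threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self.buffer = []
    def write(self, msg):
        with self._lock:               # synchronization on the send buffer
            self.buffer.append(msg)

buf = LinkSendBuffer()
threads = [threading.Thread(target=buf.write, args=(f"ctl-msg-{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```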

(三)状态处理模块2 (3) Status processing module 2

Referring to Fig. 1, in its first aspect the state processing module 2 receives the state processing tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} sent by the Openflow message distribution control module 3, and saves them into the access task queues P_state^STATE_Base_state = {P1, P2, ..., Ps} of the state objects STATE_Base_state = {S1, S2, ..., Ss};

In its second aspect, the state processing module 2 receives the state processing tasks TASK_STATE_Base_state^sub sent by the stream processing module 1, and saves them into the access task queues P_state^STATE_Base_state = {P1, P2, ..., Ps} of the state objects STATE_Base_state = {S1, S2, ..., Ss};

In its third aspect, thread B1 of the state-threads TH2 = {B1, B2, ..., Bb} extracts from P_state^STATE_Base_state the access task queues belonging to B1, denoted P_state^B1 = {P1, P2, ..., Ps}; B1 then executes the tasks in P_state^B1 by polling, and when execution completes it sends the task completion message STA2-1^B1 to the stream processing module 1.

Thread B2 of TH2 = {B1, B2, ..., Bb} extracts from P_state^STATE_Base_state the access task queues belonging to B2, denoted P_state^B2 = {P1, P2, ..., Ps}; B2 then executes the tasks in P_state^B2 by polling, and when execution completes it sends the task completion message STA2-1^B2 to the stream processing module 1.

Thread Bb of TH2 = {B1, B2, ..., Bb} extracts from P_state^STATE_Base_state the access task queues belonging to Bb, denoted P_state^Bb = {P1, P2, ..., Ps}; Bb then executes the tasks in P_state^Bb by polling, and when execution completes it sends the task completion message STA2-1^Bb to the stream processing module 1.

In its fourth aspect, the set of task completion messages sent by the state processing module 2 to the stream processing module 1 is denoted STA2-1 = {STA2-1^B1, STA2-1^B2, ..., STA2-1^Bb}.
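One polling round of a state thread Bb over its access task queues, ending with the completion messages sent back toward the stream processing module, might look like the following sketch. The deques stand in for P_state^Bb, the callables stand in for SA tasks, and "STA_2-1" stands in for the completion message; all names are illustrative:

```python
from collections import deque

def state_thread_round(queues, completions):
    """One polling round over this state thread's access task queues."""
    for q in queues:
        if q:
            task = q.popleft()
            task()                              # execute the state task
            completions.append("STA_2-1")       # notify the stream processing module

executed = []
q1 = deque([lambda: executed.append("SA_Port-status")])
q2 = deque([lambda: executed.append("SA_Error")])
completions = []
state_thread_round([q1, q2], completions)
```

Because each shared state is owned by exactly one state thread, the tasks in its access queue are executed serially, which is what removes the need for locking on the state itself.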

Referring to Fig. 2, the Openflow-based event parallel controller designed by the present invention performs parallel processing of Openflow events through the following steps:

步骤一:Openflow消息的并行收发,触发对应Openflow事件 Step 1: Send and receive Openflow messages in parallel, and trigger corresponding Openflow events

在本发明的基于Openflow的事件并行控制器内每个交换机存在唯一的链接。 There is a unique link per switch within the Openflow-based event-parallel controller of the present invention.

During link establishment, the first thread C1 of the message-threads TH3 = {C1, C2, ..., Cc} listens for link requests from the Openflow switches SW = {D1, D2, ..., Dd}. When a link request is received, the links CON_SW^SV = {CON_D1^SV, CON_D2^SV, ..., CON_Dd^SV} are established and distributed evenly over the message-threads TH3. Each link CON_Dd^SV is processed by exactly one thread Cc.

During Openflow message reception, the message-threads TH3 = {C1, C2, ..., Cc} use the asynchronous non-blocking I/O model to receive, from the receive buffer of each link CON_Dd^SV, the Openflow messages sent by the Openflow switches SW = {D1, D2, ..., Dd}. A Packet-in event is triggered by a Packet-in message; a Flow-Removed event by a Flow-Removed message; a Port-status event by a Port-status message; and an Error event by an Error message.
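The receive path above, non-blocking polling of each link's receive buffer followed by triggering the event that matches the message type, can be sketched as follows. Deques stand in for socket receive buffers, so the asynchronous I/O layer itself is not shown; the names are illustrative:

```python
from collections import deque

# Message type -> triggered Openflow event, per the mapping in the text.
EVENT_OF = {
    "Packet-in": "Packet-in event",
    "Flow-Removed": "Flow-Removed event",
    "Port-status": "Port-status event",
    "Error": "Error event",
}

def poll_links(recv_buffers):
    """Non-blocking pass over all link receive buffers owned by one thread."""
    events = []
    for buf in recv_buffers:
        while buf:                        # drain without blocking
            events.append(EVENT_OF[buf.popleft()])
    return events

link1 = deque(["Packet-in", "Error"])     # receive buffer of CON_D1
link2 = deque(["Port-status"])            # receive buffer of CON_D2
events = poll_links([link1, link2])
```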

During Openflow message sending, the message-threads TH3 = {C1, C2, ..., Cc} use the asynchronous non-blocking I/O model to output controller-to-switch messages to the Openflow switches SW = {D1, D2, ..., Dd} from the send buffers of the links CON_Dd^SV owned by the message-threads. Operations on a link's send buffer must be synchronized.

In the present invention, Openflow messages are sent and received in parallel by multiple message-threads; each Openflow switch link is processed by a unique message-thread, with no mutual exclusion between message-threads, so Openflow messaging efficiency is maximized. In addition, the asynchronous non-blocking I/O model reduces interference between the message transfer process and the message processing process, further improving messaging efficiency.

步骤二:Openflow事件的并行处理 Step 2: Parallel processing of Openflow events

For Packet-in and Flow-Removed events, a flow object FLOW_Base_flow = {F1, F2, ..., Ff} with the Base_flow structure of Table 1 is first generated; the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} are then generated by the start method of the Base_flow structure; finally, the tasks TASK3-1 are sent to the local task queue Qz of the main thread in the stream processing module 1;

For the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow}, in the stream processing module 1 the main thread Az sends TASK3-1 by polling to the local task queues Q_TH1 = {Q1, Q2, ..., Qa} of the computing threads; the computing threads TH1 = {A1, A2, ..., Aa} execute the specific tasks in TASK3-1, dynamically generate the flow object subtasks TASK_FLOW_Base_flow^sub, and add them to Q_TH1 = {Q1, Q2, ..., Qa};

The computing threads TH1 = {A1, A2, ..., Aa} execute the specific tasks in TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} and dynamically generate the state object subtasks TASK_STATE_Base_state^sub, deciding on the value of the global attribute of the subtask's state (shown in row 4 of Table 2): if global is true, the state is a globally shared state, so TASK_STATE_Base_state^sub is handed to the state processing module 2 and the thread waits for the task completion message STA2-1 from the state processing module 2; conversely, if global is not true, the state is a locally shared state, and the thread of the stream processing module 1 that generated TASK_STATE_Base_state^sub executes it directly; the computing threads of the stream processing module 1 balance their load by task stealing.

For Port-status and Error events, processing tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} for the state objects STATE_Base_state = {S1, S2, ..., Ss} with the Base_state structure of Table 2 are first generated; the tasks TASK3-2 are then sent to the access task queues P_state^STATE_Base_state = {P1, P2, ..., Ps} of the state processing module 2.

For the tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} and TASK_STATE_Base_state^sub, in the state processing module 2: thread B1 of the state-threads TH2 = {B1, B2, ..., Bb} extracts from P_state^STATE_Base_state the access task queues belonging to B1, denoted P_state^B1 = {P1, P2, ..., Ps}, executes the tasks in P_state^B1 by polling, and on completion sends the task completion message STA2-1^B1 to the stream processing module 1; thread B2 extracts from P_state^STATE_Base_state the access task queues belonging to B2, denoted P_state^B2 = {P1, P2, ..., Ps}, executes them by polling, and on completion sends STA2-1^B2 to the stream processing module 1; thread Bb extracts from P_state^STATE_Base_state the access task queues belonging to Bb, denoted P_state^Bb = {P1, P2, ..., Ps}, executes them by polling, and on completion sends STA2-1^Bb to the stream processing module 1.

In the present invention, after an Openflow message is received, the corresponding Openflow event is triggered, and processing tasks for flow objects and state objects are generated according to the event type and handed to different threads for parallel processing. During flow event processing, subtasks can be generated dynamically and processed in parallel by multiple computing threads using task stealing, which improves flow event processing efficiency. Each shared state in the controller is handled by a unique state thread, which not only simplifies access to the shared state but also, to a certain extent, improves the computing threads' efficiency in processing flow events. The event parallel processing method of the present invention gives the Openflow controller better performance scalability.

Verification example

When executing the switch program, which involves little computation, POCA, the Openflow-message-based event parallel processing system, achieves a higher speedup than other Openflow controllers as the number of threads increases beyond 8. The reason is that each I/O thread processes its own switch links without interfering with the others, which improves processing performance. See the speedup comparison for the switch program shown in Fig. 3.

See the speedup comparison for the QPAS algorithm shown in Fig. 4. When processing the computationally intensive QPAS algorithm, POCA achieves a higher speedup than NOX as the number of threads increases. The reason is that POCA uses computing threads to accelerate processing in parallel within each event's handling, improving the processing efficiency of each event and hence the overall processing efficiency.

The invention discloses an Openflow-based event parallel controller and its event parallel processing method. The method separates the sending and receiving of Openflow messages from the processing of Openflow events, and uses additional computing threads to accelerate Openflow event processing. After the application starts, the controller establishes links with the switches and distributes the links evenly among multiple I/O threads; message transfer on each link is handled by a unique I/O thread. After receiving an Openflow message, the application triggers the corresponding Openflow event and, according to the event type, generates processing tasks for flow objects and state objects, which are handed to different threads for processing. During flow event processing, subtasks can be generated dynamically and executed in parallel by multiple threads. Shared state is handled by a unique state thread. Compared with existing parallel processing methods for Openflow events, the method of the present invention offers better performance scalability and a simpler data access pattern.

Claims (4)

1. An Openflow-based event parallel controller, characterized in that the controller comprises a stream processing module (1), a state processing module (2) and an Openflow message distribution control module (3);

In its first aspect, the Openflow message distribution control module (3) uses an asynchronous non-blocking I/O model to receive, from the receive buffer of each link, the Openflow messages sent by the Openflow switch (4); the Openflow messages include Packet-in, Flow-Removed, Port-status and Error messages;

a Packet-in message is a flow arrival message; a Flow-Removed message is a flow removal message; a Port-status message is a port status message; an Error message is an error message;

In its second aspect, the Openflow message distribution control module (3) sends the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} to the local task queue Qz of the main thread of the stream processing module (1); FA_Packet-in^FLOW_Base_flow denotes the flow processing task corresponding to the Packet-in event; FA_Flow-Removed^FLOW_Base_flow denotes the flow processing task corresponding to the Flow-Removed event;

the flow processing tasks TASK3-1 are obtained as follows: (A) a Packet-in event is first triggered by a Packet-in message; a flow object FLOW_Base_flow = {F1, F2, ..., Ff} with the Base_flow structure is then generated from the Packet-in event; finally, the flow processing task FA_Packet-in^FLOW_Base_flow corresponding to the Packet-in event is generated by the start method of the Base_flow structure; (B) a Flow-Removed event is first triggered by a Flow-Removed message; a flow object FLOW_Base_flow = {F1, F2, ..., Ff} with the Base_flow structure of Table 1 is then generated from the Flow-Removed event; finally, the flow processing task FA_Flow-Removed^FLOW_Base_flow corresponding to the Flow-Removed event is generated by the start method of the Base_flow structure;

in the flow objects FLOW_Base_flow = {F1, F2, ..., Ff} of the Base_flow structure, F1 denotes the first type of flow object, F2 the second, and Ff the last; f is the flow object identification number;

Table 1: Base_flow class

In its third aspect, the Openflow message distribution control module (3) sends the state processing tasks TASK3-2 = {SA_Port-status^STATE_Base_state, SA_Error^STATE_Base_state} to the access task queues P_state^STATE_Base_state = {P1, P2, ..., Ps} of the state processing module (2); SA_Port-status^STATE_Base_state denotes the Port-status state processing task; SA_Error^STATE_Base_state denotes the Error state processing task;

the state processing tasks TASK3-2 are obtained as follows: (A) a Port-status event is first triggered by a Port-status message; a processing task for the state objects STATE_Base_state = {S1, S2, ..., Ss} with the Base_state structure is then generated from the Port-status event, i.e. the Port-status state processing task SA_Port-status^STATE_Base_state; (B) an Error event is first triggered by an Error message; a processing task for the state objects STATE_Base_state = {S1, S2, ..., Ss} with the Base_state structure of Table 2 is then generated from the Error event, i.e. the Error state processing task SA_Error^STATE_Base_state;

in the state objects STATE_Base_state = {S1, S2, ..., Ss} of the Base_state structure, S1 denotes the first type of state object, S2 the second, and Ss the last; s is the state object identification number;

Table 2: Base_state class

In its fourth aspect, the Openflow message distribution control module (3) receives the controller-to-switch messages output by the stream processing module (1);

In its fifth aspect, the Openflow message distribution control module (3) uses the asynchronous non-blocking I/O model to output controller-to-switch messages to the Openflow switch (4) from the send buffers of the links owned by the message-threads TH3 = {C1, C2, ..., Cc};

in the message-threads TH3 = {C1, C2, ..., Cc}, C1 denotes the first thread in the Openflow message distribution control module (3), C2 the second, and Cc the last; c is the thread identification number in the Openflow message distribution control module (3);

In its first aspect, the stream processing module (1) receives the flow processing tasks TASK3-1 = {FA_Packet-in^FLOW_Base_flow, FA_Flow-Removed^FLOW_Base_flow} output by the Openflow message distribution control module (3);

In its second aspect, the stream processing module (1) saves TASK3-1 into the local task queue Qz of the main thread;

In its third aspect, the stream processing module (1) sends TASK3-1 by polling to the local task queues Q_TH1 = {Q1, Q2, ..., Qa} of the computing threads;

the set of local task queues corresponding to the computing threads TH1 = {A1, A2, ..., Aa} is denoted Q_TH1 = {Q1, Q2, ..., Qa}; in the flow-threads TH1 = {A1, A2, ..., Az, ..., Aa}, A1 denotes the first thread in the stream processing module (1), A2 the second, Az the z-th, and Aa the last; a is the thread identification number in the stream processing module (1); the local task queue corresponding to the first thread A1 is denoted Q1, that of the second thread A2 is Q2, that of the z-th thread Az is Qz, and that of the last thread Aa is Qa;

In its fourth aspect, the stream processing module (1) executes the specific tasks in TASK3-1; it dynamically generates processing tasks for the flow objects FLOW_Base_flow = {F1, F2, ..., Ff}, denoted flow object subtasks TASK_FLOW_Base_flow^sub, and adds them to Q_TH1 = {Q1, Q2, ..., Qa};

In its fifth aspect, the stream processing module (1) executes the specific tasks in TASK3-1; it dynamically generates processing tasks for the state objects STATE_Base_state = {S1, S2, ..., Ss}, denoted state object subtasks TASK_STATE_Base_state^sub, and decides on the value of the global attribute of the subtask's state: if global is true, the state is a globally shared state, so the subtask is handed to the state processing module (2) and the thread waits for the task completion message STA2-1 from the state processing module (2); conversely, if global is not true, the state is a locally shared state, and the thread of the stream processing module (1) that generated the subtask executes it directly;

In its sixth aspect, the stream processing module (1) balances the task load of the computing threads by task stealing;

In its seventh aspect, the stream processing module (1) outputs controller-to-switch messages to the Openflow message distribution module (3); the computing threads synchronously write the controller-to-switch messages to be output into the send buffers of the links owned by the message-threads TH3 = {C1, C2, ..., Cc};
thread writes the controller-to-switch message to be output synchronously into the message-thread TH 3 ={C 1 ,C 2 ,…,C c } belong to the link in the send buffer; TH3={C1,C2,…,Cc}中C1表示Openflow消息分配控制模块(3)中的第一个线程,C2表示Openflow消息分配控制模块(3)中的第二个线程,Cc表示Openflow消息分配控制模块(3)中的最后一个线程,c表示Openflow消息分配控制模块(3)中的线程标识号;In TH 3 ={C 1 , C 2 ,...,C c }, C 1 represents the first thread in the Openflow message distribution control module (3), and C 2 represents the second thread in the Openflow message distribution control module (3). Thread, C c represents the last thread in the Openflow message distribution control module (3), and c represents the thread identification number in the Openflow message distribution control module (3); 状态处理模块(2)第一方面接收Openflow消息模块(3)发出的状态处理任务 TASK 3 - 2 = { SA P o r t - s t a t u s STATE B a s e _ s t a t e , SA E r r o r STATE B a s e _ s t a t e } , 并将 TASK 3 - 2 = { SA P o r t - s t a t u s STATE B a s e _ s t a t e , SA E r r o r STATE B a s e _ s t a t e } 保存到状态对象STATEBase_state={S1,S2,…,Ss}的访问任务队列中;The state processing module (2) first receives the state processing task sent by the Openflow message module (3) TASK 3 - 2 = { SA P o r t - the s t a t u the s STATE B a the s e _ the s t a t e , SA E. r r o r STATE B a the s e _ the s t a t e } , and will TASK 3 - 2 = { SA P o r t - the s t a t u the s STATE B a the s e _ the s t a t e , SA E. 
r r o r STATE B a the s e _ the s t a t e } Save to the access task queue of the state object STATE Base_state = {S 1 ,S 2 ,…,S s } middle; 访问任务队列中第一种类型的状态对象S1对应的访问任务队列记为P1,第二种类型的状态对象S2对应的访问任务队列记为P2,最后一种类型的状态对象Ss对应的访问任务队列记为Psaccess task queue The access task queue corresponding to the first type of state object S 1 is marked as P 1 , the access task queue corresponding to the second type of state object S 2 is marked as P 2 , and the last type of state object S s corresponds to The access task queue is denoted as P s ; 状态处理模块(2)第二方面接收流处理模块(1)发出的状态处理任务并将保存到状态对象STATEBase_state={S1,S2,…,Ss}的访问任务队列中;The second aspect of the status processing module (2) receives the status processing task sent by the stream processing module (1) and will Save to the access task queue of the state object STATE Base_state = {S 1 ,S 2 ,…,S s } middle; 状态处理模块(2)第三方面状态-线程TH2={B1,B2,…,Bb}中的B1中提取出属于B1的访问任务队列 P s t a t e B 1 = { P 1 , P 2 , ... , P s } , P s t a t e B 1 ∈ P s t a t e STATE B a s e _ s t a t e ; 然后B1通过轮询的方式执行中的任务;当执行完成后,向流处理模块(1)发送的任务完成消息 State processing module (2) third aspect state - thread TH 2 = B 1 in {B 1 , B 2 ,..., B b } from Extract the access task queue belonging to B1 from P the s t a t e B 1 = { P 1 , P 2 , ... , P the s } , and P the s t a t e B 1 ∈ P the s t a t e STATE B a the s e _ the s t a t e ; Then B 1 executes by polling tasks in ; when execution completes After that, the task completion message sent to the stream processing module (1) 状态-线程TH2={B1,B2,…,Bb}中B1表示状态处理模块(2)中的第一个线程,B2表示状态处理模块(2)中的第二个线程,Bb表示状态处理模块(2)中的最后一个线程,b表示状态处理模块(2)中的线程标识号;State-Thread TH 2 = {B 1 , B 2 ,..., B b } where B 1 represents the first thread in the state processing module (2), and B 2 represents the second thread in the state processing module (2) , B represents the last thread in the state processing module (2), and b represents the thread identification number in the state processing module (2); 状态-线程TH2={B1,B2,…,Bb}中的B2中提取出属于B2的访问任务队列 P s t a t e B 2 = { P 1 , P 2 , ... 
, P s } , P s t a t e B 2 ∈ P s t a t e STATE B a s e _ s t a t e ; 然后B2通过轮询的方式执行中的任务;当执行完成后,向流处理模块(1)发送的任务完成消息 State - Thread TH 2 = B 2 in {B 1 ,B 2 ,...,B b } starts from Extract the access task queue belonging to B2 from P the s t a t e B 2 = { P 1 , P 2 , ... , P the s } , and P the s t a t e B 2 ∈ P the s t a t e STATE B a the s e _ the s t a t e ; Then B 2 executes by polling tasks in ; when execution completes After that, the task completion message sent to the stream processing module (1) 状态-线程TH2={B1,B2,…,Bb}中的Bb中提取出属于Bb的访问任务队列 P s t a t e B b = { P 1 , P 2 , ... , P s } , P s t a t e B b ∈ P s t a t e STATE B a s e _ s t a t e ; 然后Bb通过轮询的方式执行中的任务;当执行完成后,向流处理模块(1)发送的任务完成消息 State - Thread TH 2 = B b in {B 1 ,B 2 ,...,B b } starts from Extract the access task queue belonging to B b from P the s t a t e B b = { P 1 , P 2 , ... , P the s } , and P the s t a t e B b ∈ P the s t a t e STATE B a the s e _ the s t a t e ; Then B b executes by polling tasks in ; when execution completes After that, the task completion message sent to the stream processing module (1) 对于状态处理模块(2)第四方面向流处理模块(1)发送的任务完成消息集合记为 STA 2 - 1 = { STA 2 - 1 B 1 , STA 2 - 1 B 2 , ... , STA 2 - 1 B b } . For the task completion message set sent by the state processing module (2) to the stream processing module (1) in the fourth aspect, it is denoted as STA 2 - 1 = { STA 2 - 1 B 1 , STA 2 - 1 B 2 , ... , STA 2 - 1 B b } . 2.根据权利要求1所述的基于Openflow的事件并行控制器,其特征在于:该控制器与现有Openflow控制器配合使用,且内嵌在Openflow网络体系结构中。2. The event parallel controller based on Openflow according to claim 1, characterized in that: the controller is used in conjunction with the existing Openflow controller and is embedded in the Openflow network architecture. 3.依据权利要求1所述的基于Openflow的事件并行控制器进行的事件并行处理方法,其特征在于有下列步骤:3. 
An event parallel processing method performed by the Openflow-based event parallel controller according to claim 1, characterized in that it comprises the following steps:

Step 1: Openflow messages are sent and received in parallel, triggering the corresponding Openflow events.

Within the Openflow-based event parallel controller there is a unique link for each switch.

During link establishment, the first thread C1 of the message threads TH3 = {C1, C2, …, Cc} listens for link requests from the Openflow switches SW = {D1, D2, …, Dd}; when a link request is received, the links CON_SW^SV = { CON_D1^SV, CON_D2^SV, …, CON_Dd^SV } are established and distributed evenly among the message threads TH3; any single link is handled by exactly one thread Cc.

While receiving Openflow messages, the message threads TH3 = {C1, C2, …, Cc} use an asynchronous non-blocking I/O model to read, from the receive buffers of the links, the Openflow messages sent by the switches SW = {D1, D2, …, Dd}; a Packet-in message triggers a Packet-in event; a Flow-Removed message triggers a Flow-Removed event; a Port-status message triggers a Port-status event; an Error message triggers an Error event.

While sending Openflow messages, the message threads TH3 = {C1, C2, …, Cc} use the asynchronous non-blocking I/O model to output controller-to-switch messages to the switches SW = {D1, D2, …, Dd} from the send buffers of the owning links; operations on a link's send buffer must be synchronized.

Step 2: The Openflow events are processed in parallel.

For Packet-in and Flow-Removed events, flow objects FLOW_Base_flow = {F1, F2, …, Ff} of the Base_flow structure shown in Table 1 are first generated; the flow processing tasks TASK3-1 = { FA_Packet-in^(FLOW_Base_flow), FA_Flow-Removed^(FLOW_Base_flow) } are then generated from the start method of the Base_flow structure; finally, TASK3-1 is sent to the main thread's local task queue Qz in the flow processing module (1).

For the flow processing tasks TASK3-1, the main thread Az of the flow processing module (1) distributes TASK3-1 to the compute threads' local task queues Q_TH1 = {Q1, Q2, …, Qa} by round-robin polling; the compute threads TH1 = {A1, A2, …, Aa} execute the specific tasks in TASK3-1 and dynamically generate flow object subtasks, which are added to Q_TH1 = {Q1, Q2, …, Qa}.

The compute threads TH1 = {A1, A2, …, Aa} also dynamically generate state object subtasks while executing the specific tasks in TASK3-1; the value of the global attribute of such a subtask is examined: if global is true, the state is a globally shared state, the subtask is handed to the state processing module (2), and the flow processing module waits for the task completion message STA2-1 from the state processing module (2); otherwise, the state is a locally shared state and the thread of the flow processing module (1) that produced the subtask executes it directly; the compute threads of the flow processing module (1) balance their load by task stealing.

For Port-status and Error events, the processing tasks TASK3-2 = { SA_Port-status^(STATE_Base_state), SA_Error^(STATE_Base_state) } for the state objects STATE_Base_state = {S1, S2, …, Ss} of the Base_state structure shown in Table 2 are first generated; TASK3-2 is then sent to the access task queues P_state^(STATE_Base_state) = {P1, P2, …, Ps} of the state processing module (2).

For the tasks TASK3-2 and the state object subtasks, in the state processing module (2), thread B1 of the state threads TH2 = {B1, B2, …, Bb} extracts from P_state^(STATE_Base_state) the access task queues belonging to B1, P_state^(B1) = {P1, P2, …, Ps}; B1 then executes the tasks in P_state^(B1) by polling; when a task completes, B1 sends the task completion message STA2-1^(B1) to the flow processing module (1); likewise, B2 extracts its access task queues P_state^(B2), executes their tasks by polling, and on completion sends STA2-1^(B2) to the flow processing module (1); and Bb extracts its access task queues P_state^(Bb), executes their tasks by polling, and on completion sends STA2-1^(Bb) to the flow processing module (1).

4. The event parallel processing method performed by the Openflow-based event parallel controller according to claim 1, characterized in that a higher speed-up ratio is achieved as the number of threads increases.
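The task distribution scheme of the claims (a main thread feeding per-compute-thread local task queues by round-robin polling, with task stealing for load balancing) can be sketched as follows. This is an illustrative model only, not the patented implementation; all names (ComputeThreadPool, dispatch, _take) are invented for the sketch.

```python
import threading
from collections import deque
from queue import Queue

class ComputeThreadPool:
    """Sketch of compute threads A1..Aa with local task queues Q1..Qa."""

    def __init__(self, n):
        self.queues = [deque() for _ in range(n)]   # local task queues Q1..Qa
        self.locks = [threading.Lock() for _ in range(n)]
        self.next_q = 0
        self.results = Queue()

    def dispatch(self, task):
        # The main thread Az distributes tasks by round-robin polling.
        i = self.next_q % len(self.queues)
        self.next_q += 1
        with self.locks[i]:
            self.queues[i].append(task)

    def _take(self, i):
        # Take from the thread's own queue head; if it is empty, steal a
        # task from the tail of another thread's queue (task stealing).
        with self.locks[i]:
            if self.queues[i]:
                return self.queues[i].popleft()
        for j in range(len(self.queues)):
            if j != i:
                with self.locks[j]:
                    if self.queues[j]:
                        return self.queues[j].pop()
        return None

    def run(self, i):
        # Compute thread loop: execute tasks until none remain anywhere.
        while True:
            task = self._take(i)
            if task is None:
                break
            self.results.put(task())

pool = ComputeThreadPool(4)
for k in range(20):
    pool.dispatch(lambda k=k: k * k)   # stand-ins for flow processing tasks
workers = [threading.Thread(target=pool.run, args=(i,)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(pool.results.queue))
```

Every dispatched task is executed exactly once by some thread, regardless of which queue it first landed in; an idle thread drains other queues instead of blocking, which is the load-balancing property the sixth aspect of the flow processing module relies on.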
CN201310647876.0A 2013-12-04 2013-12-04 An Openflow-based event parallel controller and event parallel processing method thereof Expired - Fee Related CN103677760B (en)
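Step 1 of the claimed method has one message thread servicing many switch links with an asynchronous non-blocking I/O model. A rough sketch of that pattern is below, using Python's selectors module and an invented echo exchange in place of real Openflow framing; it is an assumption-laden illustration, not the patent's code.

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def serve(listener, n_msgs):
    # One "message thread": accepts link requests and answers messages on
    # every established link, using non-blocking sockets and one selector.
    served = 0
    sel.register(listener, selectors.EVENT_READ, data=None)
    while served < n_msgs:
        for key, _ in sel.select(timeout=1):
            if key.data is None:                     # a new link request
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)              # non-blocking link
                sel.register(conn, selectors.EVENT_READ, data=b"link")
            else:
                msg = key.fileobj.recv(1024)
                if msg:                              # message arrived on the link
                    key.fileobj.sendall(b"echo:" + msg)
                    served += 1
                else:                                # peer closed the link
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
    sel.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener, 2))
t.start()

replies = []
for payload in (b"packet-in", b"port-status"):       # two mock switches
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(payload)
    replies.append(c.recv(1024))
    c.close()
t.join()
print(replies)
```

The single serving thread never blocks on any one link: the selector reports which links are readable, so many switches can be multiplexed per thread, which is why the claims can split links evenly across a small fixed set of message threads.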

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310647876.0A CN103677760B (en) 2013-12-04 2013-12-04 An Openflow-based event parallel controller and event parallel processing method thereof


Publications (2)

Publication Number Publication Date
CN103677760A CN103677760A (en) 2014-03-26
CN103677760B true CN103677760B (en) 2015-12-02

Family

ID=50315439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310647876.0A Expired - Fee Related CN103677760B (en) An Openflow-based event parallel controller and event parallel processing method thereof

Country Status (1)

Country Link
CN (1) CN103677760B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156260B (en) * 2014-08-07 2017-03-15 北京航空航天大学 The concurrent queue accesses control system that a kind of task based access control is stolen
CN104660696B (en) * 2015-02-10 2018-04-27 上海创景信息科技有限公司 Parallel transceiving construction system and construction method thereof
CN105991588B (en) 2015-02-13 2019-05-28 华为技术有限公司 A kind of method and device for defending message attack
CN109669724B (en) * 2018-11-26 2021-04-06 许昌许继软件技术有限公司 Multi-command concurrent proxy service method and system based on Linux system
CN110177146A (en) * 2019-05-28 2019-08-27 东信和平科技股份有限公司 A kind of non-obstruction Restful communication means, device and equipment based on asynchronous event driven
CN112380028A (en) * 2020-10-26 2021-02-19 上汽通用五菱汽车股份有限公司 Asynchronous non-blocking response type message processing method
CN116185662B (en) * 2023-02-14 2023-11-17 国家海洋环境预报中心 Asynchronous parallel I/O method based on NetCDF and non-blocking communication

Citations (2)

Publication number Priority date Publication date Assignee Title
US5968160A (en) * 1990-09-07 1999-10-19 Hitachi, Ltd. Method and apparatus for processing data in multiple modes in accordance with parallelism of program by using cache memory
CN103401777A (en) * 2013-08-21 2013-11-20 中国人民解放军国防科学技术大学 Parallel search method and system of Openflow

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103139265B (en) * 2011-12-01 2016-06-08 国际商业机器公司 Network adaptation transmitter optimization method in massive parallel processing and system


Non-Patent Citations (2)

Title
Hadoop Acceleration in an OpenFlow-based Cluster; Sandhya Narayan et al.; High Performance Computing, Networking Storage and Analysis (SCC), 2012 SC Companion; 2012-11-16; pp. 535-538 *
A group-based adaptive task scheduling algorithm for multi-core environments; Li Bo et al.; Proceedings of the 2012 National Annual Conference on High Performance Computing; 2013-11-12; pp. 1-4 *

Also Published As

Publication number Publication date
CN103677760A (en) 2014-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination (C10 Entry into substantive examination)
GR01 Patent grant (C14 Grant of patent or utility model)
TR01 Transfer of patent right
Effective date of registration: 2021-04-23
Address after: 100160, No. 4, building 12, No. 128, South Fourth Ring Road, Fengtai District, Beijing, China (1515-1516)
Patentee after: Kaixi (Beijing) Information Technology Co.,Ltd.
Address before: 100191 Haidian District, Xueyuan Road, No. 37
Patentee before: BEIHANG University
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 2015-12-02
Termination date: 2021-12-04