CN103391256B - A kind of base station user face data processing optimization method based on linux system - Google Patents
- Publication number
- CN103391256B (application CN201310315568.8A)
- Authority
- CN
- China
- Prior art keywords
- buffer
- message
- packet receiving
- base station
- management unit
- Prior art date: 2013-07-25
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Mobile Radio Communication Systems (AREA)
Description
Technical Field
The present invention relates to the field of wireless communication technology, and in particular to a Linux-based method for optimizing base station user plane data processing.
Background
With the advancement of wireless communication technology, the requirements on LTE (3GPP Long Term Evolution) base station user plane data processing, especially in the downlink direction, keep rising. LTE base station user plane data consist mainly of GTPU (user plane tunneling protocol) traffic carried over UDP (User Datagram Protocol). In the traditional approach the network card receives the packets and notifies the Linux kernel by interrupt; the kernel then processes each packet layer by layer through the network protocol stack, which for GTPU traffic means the Ethernet layer, the IP (Internet Protocol) layer, the UDP layer and the socket layer, before waking the user-space program and copying the GTPU packet into user space.
The drawbacks of the traditional approach include:
Too much copying: every GTPU packet, once processed by the Linux kernel, has to be copied from kernel space into user space.
Too many context switches: the GTPU receive socket uses blocking input, so each call of the receive function to fetch one UDP packet goes through a system call, i.e. a context switch between kernel mode and user mode.
Too many interrupts: under high network load the network card raises a very large number of receive interrupts, which severely degrades the real-time behavior of the Linux system.
Summary of the Invention
The present invention proposes a Linux-based method for optimizing base station user plane data processing; its purpose is to reduce context switches, interrupts and copies during LTE base station user plane data processing.
The technical solution of the present invention is a Linux-based method for optimizing base station user plane data processing, in which a packet processing acceleration module performs the user plane data processing. The packet processing acceleration module comprises a buffer management unit, a network data frame management unit and a queue management unit, the network data frame management unit containing a packet classifier, and the method comprises an initialization process and a data transmission process.
The initialization process comprises the following sub-steps.
Step 1.1, define the packet classification rules of the packet classifier of the packet processing acceleration module; the packet classification rules are the rules that single out the user plane data of the base station, the user plane data being GTPU packets, where GTPU denotes the tunneling protocol.
Step 1.2, set up the buffer of the buffer management unit: during kernel initialization, reserve a memory block as the buffer of the buffer management unit, divide the memory block into multiple cells of equal size, and notify the buffer management unit of the physical address and size of each cell.
Step 1.3, in the kernel device tree file, associate the network data frame management unit port that connects the LTE base station to the core network with the buffer management unit buffer established in step 1.2.
Step 1.4, set up a ring buffer: during driver initialization, reserve a memory block and divide it into multiple cells of equal size used to store packet descriptors, the information in a packet descriptor including the address offset and length of the data of a GTPU packet. The header structure of the memory block holds the read pointer and the write pointer that control the ring buffer; the write pointer is the index of the cell into which the kernel writes data, and the read pointer is the index of the cell from which the user-space receive process reads data. The user-space receive process is the process that directly accesses the buffer of the buffer management unit according to the information in the packet descriptors read from the ring buffer, reassembles the data portion of the GTPU packets into messages and delivers them to other service modules for processing.
Step 1.5, when the user-space receive process is initialized, map the physical address space of the buffer management unit buffer established in step 1.2 and of the ring buffer established in step 1.4 into user space.
The data transmission process comprises the following sub-steps.
Step 2.1, the packet classifier of the packet processing acceleration module singles out the base station user plane data, stores them in the buffer of the buffer management unit, enqueues the corresponding packet descriptors into the corresponding queue of the network data frame management unit, and raises an interrupt to notify the kernel to receive the packets.
Step 2.2, kernel packet reception, which is carried out by the receive interrupt handling callback function of the QMAN. The callback first disables the receive interrupt and enters a polling state, writes the information from the packet descriptors into the ring buffer, increments the write pointer of the ring buffer and wakes up the user-space receive process. In each polling round the callback counts the number of tunneling protocol packets received in that round; if the count is below a preset threshold it leaves the polling state and re-enables the receive interrupt. At the moment of the receive interrupt, the user-space receive process is sleeping on the wait queue defined by the ring buffer driver.
Step 2.3, after the user-space receive process sleeping on the wait queue defined by the ring buffer driver has been woken, it directly accesses the buffer of the buffer management unit according to the information in the packet descriptors, skips the other information, reassembles the data portion of the GTPU packets into messages and delivers them to other service modules for processing.
Moreover, in step 2.3 the user-space receive process strips the Ethernet header, the VLAN header, the IP header and the UDP header, assembles the data portion of the GTPU packet into a message and delivers it to the other service modules.
Moreover, each cell of the ring buffer carries a flag marking whether the cell is in use.
Moreover, for GTPU packets that do not pass through the protocol stack, when the network port enters promiscuous mode a copy of the packet is handed to the protocol stack with a corresponding mark set in the packet, so that the upper layers do not process it a second time.
Moreover, in step 2.2, if the ring buffer is full, the newly received packet descriptor is discarded and the corresponding cell in the buffer of the buffer management unit is released.
Moreover, in step 1.2, when setting up the buffer of the buffer management unit, more than one memory block may be reserved as buffers of the buffer management unit, each memory block being divided into cells of a different size.
Compared with traditional LTE user plane data processing techniques, the present invention offers the following innovations:
1. Packet classification is used to single out the LTE user plane data that can skip standard protocol stack processing, i.e. the data whose processing needs to be optimized. Packets that do require the standard protocol stack, such as ICMP (Internet Control Message Protocol) and ARP (Address Resolution Protocol) packets, are few in number and not performance-critical and are still handed to the Linux protocol stack; this avoids the one-size-fits-all approach of typical optimization techniques, which must implement a complete user-space protocol stack.
2. Adaptive switching between interrupts and polling greatly reduces the number of hardware interrupts and prevents excessive hard interrupts from degrading the responsiveness of real-time service tasks. The switching technique moves between interrupt mode and polling mode according to the packet arrival rate, which guarantees processing efficiency when packets arrive at a high rate and low latency when packets arrive at a low rate.
3. A lock-free queue synchronizes the single producer and the single consumer, avoiding frequent system calls.
4. Kernel/user-space memory mapping avoids memory copies.
Description of the Drawings
Fig. 1 is a structural diagram of the BMAN buffer in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the operating principle of the ring buffer in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the ring buffer when empty in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the ring buffer when full in an embodiment of the present invention.
Fig. 5 is an interaction diagram between the BMAN buffer and the ring buffer in an embodiment of the present invention.
Fig. 6 is the classification logic diagram of the packet classifier in an embodiment of the present invention.
Detailed Description
The present invention mainly targets the optimization of LTE base station user plane data processing; it is applicable to, but not limited to, LTE base stations, and the solution equally applies to other systems that need efficient user plane data processing on an embedded Linux system. The solution meets the demand for high-speed data transmission in wireless communication base station deployments and effectively reduces the consumption of system resources. The design makes full use of a series of techniques, namely packet classification, adaptive interrupt/polling switching, lock-free queues and memory mapping, which together reduce the number of interrupts, reduce process context switches and completely avoid data copies.
The technical solution of the present invention is described in detail below with reference to the drawings and an embodiment.
The initialization process comprises the following sub-steps, executed in order:
Step 1.1, define the packet classification rules of the packet classifier of the packet processing acceleration module so as to single out the user plane data of the base station (i.e. GTPU packets, identified by UDP destination port 2152).
In a concrete implementation, this step depends on the coprocessor of the CPU that realizes the packet processing acceleration module. The embodiment uses an existing Freescale PowerPC, whose DPAA (Data Path Acceleration Architecture) implements the packet processing acceleration module and provides a network data frame management unit, a buffer management unit and a queue management unit, commonly abbreviated FMAN, BMAN and QMAN respectively; the packet classifier resides in the network data frame management unit. The classification rules of the Freescale PowerPC packet classifier are defined and configured into the hardware.
The purpose of this step is to single out the packets whose processing needs to be optimized. Such packets no longer go through the elaborate processing of the Linux network protocol stack; for LTE user plane data they are the GTPU packets (UDP destination port 2152). The packets that do not need optimized processing, i.e. those that must be handled by the TCP/IP protocol stack, include ICMP packets, ARP packets and the like. The packet classification rule can therefore be set on the GTPU characteristic of UDP destination port 2152, and during the subsequent data transmission process the two kinds of packets are enqueued into different queues according to this rule.
Step 1.2, set up the BMAN buffer: during kernel initialization, reserve a memory block by allocating kernel memory, divide the block into multiple cells of equal size, and notify the BMAN (buffer management) unit of the DPAA module of the physical address and size of each cell. To satisfy the Ethernet standard MTU (maximum transmission unit), the cell size is normally set to about 2K bytes.
In the embodiment, a memory block is reserved as the BMAN buffer during kernel initialization. As shown in Fig. 1, the block is divided into multiple cells of equal size, each 2112 bytes; a 4-megabyte block yields 1985 cells, at offsets 0, 2112, 2112*2, ..., 2112*1984 as shown in the figure (here * denotes multiplication). 2112 bytes is an odd multiple of 64 bytes; this choice accommodates the 1500-byte standard MTU, leaves room for the hardware to append some auxiliary information, and makes even use of the two L3 caches of the PowerPC 4080. After the division, the physical address and size of each cell are reported to the BMAN unit of the DPAA module. To avoid wasting physical memory, several memory blocks can also be reserved, each serving as a separate BMAN buffer with a different cell size, such as 64, 172 or 320 bytes; the DPAA hardware module then selects the most suitable size for each network frame, e.g. a 173-byte frame goes into a 320-byte BMAN cell and a 100-byte frame into a 172-byte cell.
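The cell carving described above can be pictured with the following minimal C sketch; bman_seed_cell() is a placeholder standing in for the platform call that hands a cell's physical address and size to the BMAN unit, and its name is an assumption made here only for illustration.

```c
#include <stdint.h>
#include <stddef.h>

#define CELL_SIZE   2112u                 /* odd multiple of 64 bytes, holds a 1500-byte MTU frame */
#define BLOCK_SIZE  (4u * 1024 * 1024)    /* one reserved 4 MB block */

/* Hypothetical hook standing in for the real BMAN seeding call. */
extern void bman_seed_cell(uint64_t phys_addr, size_t size);

/* Carve the reserved block into equal cells and report each one to BMAN. */
static void seed_bman_buffer(uint64_t block_phys_base)
{
    size_t ncells = BLOCK_SIZE / CELL_SIZE;   /* 1985 cells for a 4 MB block */

    for (size_t i = 0; i < ncells; i++)
        bman_seed_cell(block_phys_base + (uint64_t)i * CELL_SIZE, CELL_SIZE);
}
```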
Step 1.3, associating the FMAN (network data frame management unit) port with the BMAN buffer: in the kernel DTS (device tree) file, the FMAN port that connects the LTE base station to the core network is associated with the BMAN buffer established in step 1.2; if several BMAN buffers have been allocated, they can all be bound to this port.
Step 1.4, set up the ring buffer: during driver initialization, reserve a memory block by allocating kernel memory and divide it into multiple cells of equal size used to store the address offset and length of the data of GTPU packets. All cells share one header structure, which stores the read and write pointers that control the ring buffer and synchronize the kernel (producer) with the user-space receive process (consumer). The write pointer in the header is the index of the cell the producer fills; the read pointer is the index of the cell the consumer reads. In a concrete implementation, defining the ring buffer size as a power of two improves the efficiency of the pointer operations.
In the embodiment, the synchronization of the ring buffer is illustrated in Fig. 2; with the ring buffer, the traditional mutual exclusion lock structure is no longer needed. Initially the read pointer and the write pointer are both zero. Each time the producer produces one item, i.e. each time the kernel driver receives a GTPU packet, it increments the write pointer; when the incremented write pointer exceeds the total number of cells in the ring, it is taken modulo the number of cells. For example, with 256 cells numbered 0 to 255, a write pointer of 255 becomes 256 after the increment and 0 after the modulo, i.e. the write pointer wraps around and again points to the first cell of the ring. Writing this operation as NEXT gives NEXT(X) = (X + 1) % N, where X is the pointer value and N is the number of cells in the ring. Likewise, each time the consumer takes one item, i.e. each time the user-space receive process has finished processing a GTPU packet, it advances the read pointer in the same way. As shown in Fig. 3, when the read pointer equals the write pointer, all data produced so far have been taken by the consumer and the producer has not yet produced new data; the user-space program must then sleep and wait for the producer to produce more data. As shown in Fig. 4, when NEXT applied to the write pointer equals the read pointer, the buffer is full and the producer must actively discard subsequently received GTPU packets; the number of cells in the ring therefore represents how large a difference between the producer and consumer processing rates the system can tolerate. The driver also has to implement a memory mapping function that acts as the bridge between the BMAN buffer memory and the user-space virtual address space.
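The pointer arithmetic described above reduces to the following C sketch; the structure names and field layout are illustrative assumptions, while the NEXT operation and the empty/full tests mirror Fig. 3 and Fig. 4 directly.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_CELLS 256u   /* power of two, so the modulo could also be a mask */

/* Descriptor stored in one ring cell: where the GTPU frame sits in the BMAN buffer. */
struct ring_desc {
    uint32_t offset;   /* byte offset of the frame inside the mapped BMAN buffer */
    uint32_t length;   /* actual frame length */
};

/* Header shared by the producer (kernel) and the consumer (user-space receive process). */
struct ring_hdr {
    volatile uint32_t write_idx;   /* next cell the producer fills */
    volatile uint32_t read_idx;    /* next cell the consumer reads */
};

static inline uint32_t ring_next(uint32_t x) { return (x + 1u) % RING_CELLS; }

static inline bool ring_empty(const struct ring_hdr *h)
{
    return h->read_idx == h->write_idx;            /* Fig. 3: nothing to consume */
}

static inline bool ring_full(const struct ring_hdr *h)
{
    return ring_next(h->write_idx) == h->read_idx; /* Fig. 4: producer must drop */
}
```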
The embodiment provides a user-space receive process, which directly accesses the BMAN memory according to the information in the packet descriptors read from the ring buffer, reassembles the GTPU packets into messages and delivers them to other service modules for processing.
Step 1.5, when the user-space receive process is initialized, the kernel memory allocated in steps 1.2 and 1.4 is mapped into user space.
In the embodiment, when the user-space receive process is initialized, the physical address space containing the BMAN buffer established in step 1.2 and the ring buffer established in step 1.4 is mapped into user space, i.e. a region of kernel space is mapped to a region of user space, which avoids copying network data from kernel mode to user mode.
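A minimal user-space sketch of this mapping could look as follows; the device node name /dev/pkt_ring, the region sizes and the offsets are assumptions chosen for illustration, whereas mmap() itself is the standard POSIX call used for such kernel/user-space mappings.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define RING_BYTES  4096u                /* assumed size of the ring buffer region */
#define BMAN_BYTES  (4u * 1024 * 1024)   /* the 4 MB BMAN buffer block */

int main(void)
{
    int fd = open("/dev/pkt_ring", O_RDWR);   /* hypothetical char device exported by the driver */
    if (fd < 0) { perror("open"); return 1; }

    /* Map the ring buffer (descriptors plus read/write pointers) ... */
    void *ring = mmap(NULL, RING_BYTES, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    /* ... and the BMAN packet buffer, assumed here to start at offset RING_BYTES. */
    void *bman = mmap(NULL, BMAN_BYTES, PROT_READ | PROT_WRITE, MAP_SHARED, fd, RING_BYTES);

    if (ring == MAP_FAILED || bman == MAP_FAILED) { perror("mmap"); return 1; }

    /* From here on, descriptors in `ring` index frames inside `bman` with zero copies. */
    printf("ring mapped at %p, BMAN buffer at %p\n", ring, bman);
    return 0;
}
```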
The use process (the data transmission process) comprises the following sub-steps, executed in order:
Step 2.1, the DPAA packet classifier singles out the base station user plane data, stores them in the BMAN buffer of the DPAA, and enqueues the corresponding packet descriptors (which describe the storage address and size of the packet) into the corresponding queue of the QMAN (queue management) unit of the DPAA; the QMAN unit raises an interrupt to notify the kernel to receive the packets.
In the embodiment, after the network interface of the LTE base station receives a packet from the core network side, the hardware packet classifier in the DPAA module classifies it according to the classification grammar defined in step 1.1 of the initialization: descriptors of GTPU packets with UDP destination port 2152 are enqueued into one queue, and all other packet descriptors into another queue. The queue numbers can be preset in a concrete implementation and may take any values between 0x1 and 0xFFFF. The classification logic is shown in Fig. 6: a packet enters the packet classifier, which checks whether it is an IP fragment; if so, the packet is enqueued into queue 0x2000, otherwise the classifier checks whether the destination port is 2152, enqueues the packet into queue 0x2001 if it is and into queue 0x2000 otherwise, and the classification ends. The information recorded in a packet descriptor includes the start physical address of the BMAN buffer cell assigned at initialization and the size of the packet; after the hardware has enqueued the descriptor, it raises the corresponding hardware interrupt.
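The decision flow of Fig. 6 reduces to the following sketch; the function only mirrors the documented rule (IP fragments and all non-GTPU traffic to queue 0x2000, UDP destination port 2152 to queue 0x2001) and is not the actual FMAN classifier configuration, whose syntax is hardware-specific.

```c
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_STACK  0x2000u   /* packets left to the Linux protocol stack */
#define QUEUE_GTPU   0x2001u   /* GTPU packets taken over by the fast path */
#define GTPU_PORT    2152u

/* Fields assumed to have been parsed from the frame headers. */
struct parsed_pkt {
    bool     is_ip_fragment;
    bool     is_udp;
    uint16_t udp_dst_port;
};

static uint32_t classify(const struct parsed_pkt *p)
{
    if (p->is_ip_fragment)                       /* fragments after the first carry no UDP header */
        return QUEUE_STACK;
    if (p->is_udp && p->udp_dst_port == GTPU_PORT)
        return QUEUE_GTPU;
    return QUEUE_STACK;                          /* ICMP, ARP and everything else */
}
```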
Step 2.2, kernel packet reception, which is carried out by the receive interrupt handling callback function of the QMAN. At the moment of the receive interrupt, the user-space receive process is sleeping on the wait queue defined by the ring buffer driver. The callback first disables the receive interrupt and enters polling mode, writes the information from the packet descriptors into the ring buffer, increments the write pointer of the ring buffer and wakes up the user-space receive process. In each polling round the callback counts the number of GTPU packets received in that round; if the count is below a preset threshold (which can be chosen in advance by a person skilled in the art, e.g. 64), the interrupt is re-enabled, i.e. polling mode returns to interrupt mode; if the number of GTPU packets received in one round is not below this budget, the interrupt is not re-enabled and the system stays in the polling state.
In the embodiment, kernel packet reception is carried out by the receive interrupt handling callback function of the QMAN. The callback first disables the receive interrupt and enters the polling state. Since step 2.1 has already enqueued GTPU and non-GTPU packets into different queues, they can be handled differently according to the queue number. If the packet is a GTPU packet, the information from its descriptor is written into the ring buffer; the interaction between the BMAN buffer and the ring buffer is shown in Fig. 5. The kernel/user-space mapped memory in the figure is the BMAN buffer allocated by the kernel, X is the start address of this region, and X+2112, X+2112×2 and so on are the start addresses of the cells in the buffer, each cell being 2112 bytes long. The user-space receive process maps this region into its own virtual address space. When a GTPU packet arrives, the DPAA hardware module places it into some cell of the BMAN buffer, say the physical address range from X+2112×4 to X+2112×5; the kernel driver writes this cell's offset within the kernel/user-space mapped memory, 2112×4, together with the actual length of the GTPU packet, into the ring buffer cell currently pointed to by the write pointer, increments the write pointer of the ring buffer, and then wakes up the user-space receive process waiting on the wait queue defined for the ring buffer. If the ring buffer is full when the kernel driver receives a packet, the packet descriptor is discarded and the corresponding cell of the BMAN buffer memory is released. Strictly speaking, once the write pointer has been advanced, the BMAN buffer cell it referred to could be released, but a single hardware release operation can free up to 8 BMAN buffer cells and each such call is relatively expensive; to improve efficiency, the cell addresses pointed to by the write pointer are therefore cached and the corresponding BMAN buffer cells are released in one batch once 8 of them have accumulated. Since releasing requires the cell address and size saved in the ring buffer, a variable is needed to record whether a ring cell has already been used once, i.e. has already been filled with the address of a BMAN buffer cell; each cell of the ring buffer can carry a flag (key value) marking whether it has been used, so that buffers that have never been filled are not released.
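The kernel-side producer path just described can be sketched roughly as follows, reusing the illustrative ring_hdr/ring_desc layout from the earlier sketch; wake_up_interruptible() and the wait queue are standard Linux kernel facilities, while the helper itself and its caller are assumptions.

```c
/* Kernel-side producer, sketched under the same illustrative layout as above. */
#include <linux/types.h>
#include <linux/wait.h>
#include <asm/barrier.h>

extern struct ring_hdr  *ring;           /* shared header: read/write indices */
extern struct ring_desc *ring_cells;     /* RING_CELLS descriptor slots */
extern wait_queue_head_t pkt_wq;         /* wait queue defined by the ring buffer driver */

/* Called from the QMAN receive callback for every GTPU frame descriptor. */
static bool ring_push(uint32_t bman_offset, uint32_t len)
{
    uint32_t w = ring->write_idx;

    if (ring_next(w) == ring->read_idx)  /* ring full: caller drops the frame */
        return false;                    /* and releases the BMAN cell */

    ring_cells[w].offset = bman_offset;  /* e.g. 2112 * 4 in the Fig. 5 example */
    ring_cells[w].length = len;
    smp_wmb();                           /* make the descriptor visible before the index */
    ring->write_idx = ring_next(w);

    wake_up_interruptible(&pkt_wq);      /* the consumer may be sleeping */
    return true;
}
```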
In addition, the most common way to debug network problems is to capture packets with TCPDUMP (a packet capture tool), but the GTPU packets now bypass the Linux network protocol stack. Therefore, when the network interface enters promiscuous mode, i.e. when TCPDUMP is running, a GTPU packet can be copied into an sk_buff (the data structure that describes a packet in the Linux network stack), given a special mark in the sk_buff structure, and handed to the protocol stack; this satisfies the debugging need, and the special mark keeps the protocol stack from delivering such packets to the upper-layer user-space handler a second time. If the packet is not a GTPU packet, it is a packet that must be handled by the Linux network protocol stack; since the protocol stack requires the sk_buff structure, the callback copies the packet described by the descriptor into an sk_buff and its auxiliary structures, dynamically allocated by the kernel memory allocator, and calls the relevant functions to hand it to the protocol stack. Because the data have already been copied, the BMAN buffer cell can be released immediately, while the sk_buff memory is released by the network protocol stack.
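A rough sketch of this slow-path hand-off is shown below; netdev_alloc_skb(), skb_put(), eth_type_trans() and netif_rx() are standard kernel APIs, whereas using skb->mark (and the particular value) as the "already handled by the fast path" flag is an assumption made for illustration, the description only requiring some special mark in the sk_buff.

```c
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

#define FASTPATH_SEEN_MARK 0x47545055u   /* arbitrary illustrative value ("GTPU") */

static void inject_to_stack(struct net_device *dev, const void *frame,
                            unsigned int len, bool already_handled)
{
    struct sk_buff *skb = netdev_alloc_skb(dev, len);
    if (!skb)
        return;                            /* drop the copy on allocation failure */

    memcpy(skb_put(skb, len), frame, len); /* copy the frame out of the BMAN cell */
    skb->protocol = eth_type_trans(skb, dev);

    if (already_handled)                   /* GTPU copy made only for promiscuous mode */
        skb->mark = FASTPATH_SEEN_MARK;

    netif_rx(skb);                         /* the BMAN cell can be freed by the caller now */
}
```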
In one polling round of the receive callback, if the number of packets processed is below the budget, network packets are entering the system slowly; it may be that, after the callback has run several times, no more data are arriving in the BMAN buffer, in which case the receive interrupt is re-enabled, i.e. polling mode returns to interrupt mode. Packet reception can thus switch adaptively between interrupt and polling depending on how fast network packets enter the system.
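The adaptive switch can be pictured as follows; the budget value comes from step 2.2, while the queue-draining call and the interrupt enable/disable hooks are hypothetical placeholders for the platform-specific QMAN operations.

```c
#define RX_BUDGET 64   /* preset threshold from step 2.2 */

extern int  qman_drain_rx(int budget);   /* hypothetical: returns number of frames handled */
extern void rx_irq_enable(void);         /* hypothetical platform hooks */
extern void rx_irq_disable(void);

/* Entered from the receive interrupt; leaves with either polling active
 * (high load) or the interrupt re-enabled (low load). */
static void rx_poll_loop(void)
{
    rx_irq_disable();                    /* step into polling mode */

    for (;;) {
        int handled = qman_drain_rx(RX_BUDGET);
        if (handled < RX_BUDGET) {       /* traffic has slowed down */
            rx_irq_enable();             /* back to interrupt mode */
            break;
        }                                /* otherwise keep polling */
    }
}
```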
Step 2.3, after the user-space receive process sleeping on the wait queue defined by the ring buffer driver has been woken by step 2.2, it directly accesses the BMAN memory according to the information in the packet descriptor, skips the Ethernet header, VLAN header, IP header and UDP header, reassembles the GTPU packet into a message and delivers it to other service modules for processing.
Once woken, the user-space receive process can access the corresponding GTPU packet in the BMAN buffer without copying, using the packet address offset and packet size written in the ring buffer cell currently pointed to by the read pointer. In the embodiment, after the user-space receive process starts, it sleeps on the wait queue defined by the ring buffer driver. After being woken by step 2.2, and because the BMAN buffer has already been mapped into its own user address space, it accesses the BMAN buffer directly according to the information in the packet descriptor (which contains the physical address offset and size of the packet). Depending on the networking model, e.g. whether a VLAN tag is present and whether the IP version is IPv4 or IPv6, the user program skips the Ethernet header, VLAN header, IP header and UDP header, then reassembles the GTPU packet into a message and hands it to other service modules for processing.
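A user-space consumer loop along these lines might look as shown below; the fixed header lengths (14-byte Ethernet header, optional 4-byte VLAN tag, 20-byte IPv4 header without options, 8-byte UDP header) are simplifying assumptions for one possible networking model, and deliver_gtpu() stands in for the hand-off to the downstream service modules.

```c
#include <stdint.h>
#include <stddef.h>

#define ETH_HDR_LEN   14u
#define VLAN_TAG_LEN  4u
#define IPV4_HDR_LEN  20u   /* assumes no IP options */
#define UDP_HDR_LEN   8u

extern struct ring_hdr  *ring;        /* mapped in step 1.5 (see the earlier sketches) */
extern struct ring_desc *ring_cells;
extern uint8_t          *bman_base;   /* user-space mapping of the BMAN buffer */

extern void deliver_gtpu(const uint8_t *payload, size_t len);   /* downstream service module */

/* Drain everything currently available in the ring; the caller sleeps when it is empty. */
static void consume_ring(void)
{
    while (ring->read_idx != ring->write_idx) {
        uint32_t r = ring->read_idx;
        const uint8_t *frame = bman_base + ring_cells[r].offset;
        size_t len = ring_cells[r].length;

        size_t skip = ETH_HDR_LEN;
        /* 0x8100 in the EtherType field means a VLAN tag is present. */
        if (frame[12] == 0x81 && frame[13] == 0x00)
            skip += VLAN_TAG_LEN;
        skip += IPV4_HDR_LEN + UDP_HDR_LEN;

        if (len > skip)
            deliver_gtpu(frame + skip, len - skip);   /* GTPU message, zero copies */

        ring->read_idx = ring_next(r);                /* hand the cell back */
    }
}
```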
Because of a limitation of the packet classifier of the PowerPC 4080, when the sender transmits a GTPU packet as IP fragments, only the first IP fragment contains the UDP header, whereas the classification rule simply checks the UDP destination port of an IP packet; the remaining fragments would therefore be handed to the Linux network protocol stack, but since the first fragment is not handed to the stack, IP reassembly in the stack would fail. To avoid this problem, such packets can be classified as non-GTPU packets in an implementation; the packet classifiers in higher-end CPUs of the PowerPC series do not have this issue.
The specific embodiments described herein merely illustrate the spirit of the present invention. A person skilled in the art may make various modifications or additions to the described embodiments, or substitute them in a similar way, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Priority Applications (1)
- CN201310315568.8A, filed 2013-07-25: A kind of base station user face data processing optimization method based on linux system
Publications (2)
- CN103391256A, published 2013-11-13
- CN103391256B, granted 2016-01-13