
CN115066844A - Dynamic uplink end-to-end data transmission scheme with optimized memory path - Google Patents

Dynamic uplink end-to-end data transmission scheme with optimized memory path

Info

Publication number
CN115066844A
CN115066844A (Application No. CN202080094295.7A)
Authority
CN
China
Prior art keywords
data, packet, memory, window, internal memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080094295.7A
Other languages
Chinese (zh)
Inventor
刘素琳
杨鸿魁
马天安
H·洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zeku Technology Shanghai Corp Ltd
Original Assignee
Zheku Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zheku Technology Co., Ltd.
Publication of CN115066844A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22: Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the apparatus and method for memory processing may be applicable to communication systems, such as wireless communication systems. In an example, an apparatus for memory processing may comprise: an external memory configured to store layer three (L3) data; and an internal memory configured to store layer two (L2) data. The apparatus may also include circuitry. The circuitry is configured to: process a header of a packet and move the header from the external memory to the internal memory; process a remaining portion of the packet upon determining that at least two predetermined conditions are satisfied; and transfer the remaining portion of the packet from the external memory to the internal memory.

Description

Dynamic uplink end-to-end data transmission scheme with optimized memory path
Cross Reference to Related Applications
This application claims priority to U.S. Provisional Patent Application No. 62/966,686, filed on January 28, 2020, which is hereby incorporated by reference in its entirety.
Background
Embodiments of the present disclosure relate to an apparatus and method for memory processing, which is applicable to a communication system, for example, a wireless communication system.
Communication systems, such as wireless communication systems, are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasting. When a packet is to be transmitted over a medium, such as over the air in the case of wireless communications, a modem having a protocol stack embodied in hardware and software may pass the packet through a protocol stack having a physical layer (including a Radio Frequency (RF) module), ultimately converting bits of the packet into radio waves.
Disclosure of Invention
Embodiments of an apparatus and method for memory processing are disclosed herein.
In one example, an apparatus for memory processing may include an external memory configured to store layer three (L3) data. The apparatus may also include an internal memory configured to store layer two (L2) data. The apparatus may also include circuitry. The circuitry is configured to: processing a header of a packet and moving the header from the external memory to the internal memory; processing the remainder of the packet upon determining that at least two predetermined conditions are satisfied; and transferring the remaining portion of the packet from the external memory to the internal memory. The at least two predetermined conditions may include that space in the internal memory is available and that a Medium Access Control (MAC) layer is ready to prepare data for a next transmission window.
In another example, an apparatus for memory processing may include an external memory configured to store L3 data and an internal memory configured to store L2 data. The apparatus may also include circuitry configured to maintain L3 data according to at least one first window and L2 data according to at least one second window shorter than the first window.
In another example, a method for memory processing may include processing, by circuitry, a header of a packet and moving the header from an external memory configured to store L3 data to an internal memory configured to store L2 data. The method may also include processing, by the circuitry, a remaining portion of the packet upon determining that at least two predetermined conditions are satisfied. The method may also include transferring, by the circuitry, the remaining portion of the packet from the external memory to the internal memory. The at least two predetermined conditions may include that space in the internal memory is available and that the MAC layer is ready to prepare data for the next transmission window.
In yet another example, a method for memory processing may include maintaining, by circuitry, L3 data according to at least one first window, wherein the L3 data is stored in an external memory. The method may also include maintaining, by the circuitry, L2 data according to at least one second window that is shorter than the first window, wherein the L2 data is stored in the internal memory.
In yet another example, a non-transitory computer-readable medium may encode instructions that, when executed by a microcontroller of a node, cause the node to perform a process for memory processing. The process may include any of the methods described above.
Drawings
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the detailed description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.
Fig. 1 illustrates data processing in a protocol stack according to some embodiments of the present disclosure.
Fig. 2 shows a data flow diagram illustrating some embodiments of the present disclosure.
Fig. 3A and 3B illustrate internal memory corresponding to the data flow diagram of fig. 2 in some embodiments of the present disclosure.
Fig. 4A illustrates a method according to some embodiments of the present disclosure.
Fig. 4B illustrates another method according to some embodiments of the present disclosure.
Fig. 5 illustrates a detailed block diagram of a baseband system on a chip (SoC) implementing layer 2 packet processing using layer 2 circuitry and a Microcontroller (MCU), according to some embodiments of the present disclosure.
Fig. 6 illustrates an example wireless network in which some aspects of the present disclosure may be implemented, which may incorporate memory processing, in accordance with some embodiments of the present disclosure.
FIG. 7 illustrates a node that may be used for memory processing, according to some embodiments of the present disclosure.
Detailed Description
While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Those skilled in the art will recognize that other configurations and arrangements may be used without departing from the spirit and scope of the present disclosure. It will be apparent to those skilled in the relevant art that the present disclosure may also be used in a variety of other applications.
It is noted that references in the specification to "one embodiment," "an example embodiment," "some embodiments," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In general, terms may be understood at least in part from the context of their usage. For example, the term "one or more" as used herein may be used to describe any feature, structure, or characteristic in the singular or may be used to describe a combination of features, structures, or characteristics in the plural, depending, at least in part, on the context. Similarly, terms such as "a" or "the" may also be understood to refer to a singular use or to a plural use, depending at least in part on the context. Moreover, the term "based on" may be understood to not necessarily be meant to represent a dedicated set of factors, but may instead allow for the presence of some additional factors that are not necessarily explicitly described, depending at least in part on the context.
Various aspects of a wireless communication system will now be described with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, units, components, circuits, steps, operations, procedures, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, firmware, computer software, or any combination thereof. Whether such elements are implemented as hardware, firmware, or software depends upon the particular application and design constraints imposed on the overall system.
The techniques described herein may be used for various wireless communication networks such as Code Division Multiple Access (CDMA) systems, Time Division Multiple Access (TDMA) systems, Frequency Division Multiple Access (FDMA) systems, Orthogonal Frequency Division Multiple Access (OFDMA) systems, single carrier frequency division multiple access (SC-FDMA) systems, and other networks. The terms "network" and "system" are often used interchangeably. The CDMA network may implement a Radio Access Technology (RAT), such as Universal Terrestrial Radio Access (UTRA), evolved UTRA (E-UTRA), CDMA 2000, and so on. A TDMA network may implement a RAT such as GSM. The OFDMA network may implement a RAT, such as Long Term Evolution (LTE) or New Radio (NR). The techniques and systems described herein may be used for the wireless networks and RATs described above as well as other wireless networks and RATs. Likewise, the techniques and systems described herein may also be applied to wired networks, such as fiber optic, coaxial, or twisted pair based networks or satellite networks.
Some embodiments of the present disclosure relate to mechanisms to manage memory and processing as packets traverse protocol layers. Some embodiments also relate to minimal internal memory for transmission and retransmission purposes of such packets. In addition, some embodiments relate to efficient management of retransmission data storage.
In a communication device, such as a radio modem used in User Equipment (UE) or other terminal equipment of a fifth-generation (5G) communication system, L3 packet data to be transmitted from the communication device is stored in an external memory. The external memory may be shared by multiple components within the modem, or by other components of the UE outside the modem. During L3 IP header processing, the L3 packet data may be moved into an internal memory, which may also be referred to as a local memory. For example, the internal memory may be local to a given system-on-chip, as opposed to the external memory, which may be on another chip of the same device. After the L3 IP header processing, the L3 packet data is stored back in the external memory.
A trigger is then sent to the Packet Data Convergence Protocol (PDCP) layer to process the L3 packets one function at a time. These functions may include robust header compression (ROHC), integrity checking, and ciphering. During processing, the L3 packet may be saved to external memory or internal memory for subsequent steps in the processing chain.
The PDCP L2 packet is then queued into a logical channel (LC) queue for further processing. The Radio Link Control (RLC) layer then classifies the data into various RLC queues within the LC.
Finally, the MAC layer retrieves the L2 data from the LC queue and moves the L2 data to internal memory for transmission to the PHY layer.
The above-described method of processing packet data may result in inefficient movement of a packet from L3 through multiple PDCP layer functions and then to the RLC and MAC layers. The method relies on multiple external memory accesses for reading and writing. In addition, it requires a large-capacity external memory and a large-capacity internal memory. Given the large memory capacities and the frequent memory accesses, a relatively large amount of power may be consumed.
Some embodiments may have various benefits and/or advantages with respect to various technical aspects. For example, some embodiments of the present disclosure provide a method of reducing a data transmission path through memory in a UL ETE data path. Some embodiments still ensure that the packet traverses all of the multiple data plane layers required to process the incoming L3 packet. In addition, some embodiments minimize data access to external memory, thereby saving power. Furthermore, some embodiments minimize the size of the internal memory space, even though the internal memory may provide fast performance at higher power and area costs.
Some embodiments of the present disclosure relate to an efficient memory path method for dynamic transmission of 5G Uplink (UL) packets for data transmission that allows for minimal data movement, optimized external memory access, and small capacity internal memory for high throughput and low latency packets.
The challenge in the UL ETE data path is to find the minimum data transmission path through memory, which is necessary to traverse all of the multiple data plane layers to process the incoming L3 packets and minimize data access to external memory to save power.
Furthermore, it may be beneficial to minimize the size of the internal memory space. Internal memory space may provide fast performance, but at the expense of higher power and area costs. The internal memory 514 in fig. 5 is an example of an internal memory, which is different from the external memory 506 in fig. 5. External memory 506 may be shared by multiple components of the system, including components not shown in fig. 5. In contrast, the internal memory 514 in fig. 5 may be configured to be used exclusively by the baseband chip of the modem of the user equipment implementing the system shown in fig. 5. The baseband chip may include RF components, or the RF chip may be provided as a physically separate element.
Some embodiments relate to an efficient memory path method for dynamic transmission of fifth generation (5G) Uplink (UL) packets for data transmission. Some embodiments may allow for minimal data movement, may have optimized external memory access, and may rely on a small capacity internal memory for high throughput and low latency packets.
Some aspects of the description of some embodiments of the present disclosure discuss hardware aspects and software aspects. In some cases, a hardware aspect may refer to an aspect performed by dedicated hardware, such as a hardware-based protocol stack implementation. Fig. 5, discussed below, provides a specific example of a hardware-based protocol stack implementation with multiple application-specific integrated circuits (ASICs) that handle different layers of the protocol stack. Software aspects, on the other hand, may refer to aspects that may be executed by a general-purpose processor or by a layer-independent dedicated modem processor. Fig. 5 shows a specific example in which software aspects may be implemented on a microcontroller.
Some embodiments may rely on three different and potentially independent principles that may be used together in one aspect of some embodiments. According to a first principle, some embodiments move data from layer three (L3) external memory to layer two (L2) internal memory only around the transmission time frame.
According to a second principle, some embodiments perform Packet Data Convergence Protocol (PDCP) processing while performing data movement from the L3 external memory to the L2 internal memory.
According to a third principle, some embodiments prepare the intended Medium Access Control (MAC) Protocol Data Unit (PDU) packet directly in the appropriate location of the L2 internal memory. This preparation may involve prioritizing L2 packet data moves from L3 external memory to L2 internal memory and connecting these moves.
Each of these identified principles may be used together or differently. These three principles may be considered as principles of the first aspect of some embodiments as described above. This aspect may be referred to as optimized data movement from external memory to internal memory.
Some embodiments may rely on two different and potentially independent principles that may be used together in a second aspect of some embodiments. According to a first principle, a reduced transmission window (TXWIN) buffer may be used for prioritized L2 MAC data storage in minimal internal memory. The reduced TXWIN buffer may be used for fast transmission around a transmission time frame.
According to a second principle, a reduced retransmission window (RETXWIN) buffer may be used for L2 MAC data storage in a minimum internal memory. The reduced RETXWIN buffer may be used for fast hybrid automatic repeat request (HARQ) retransmissions close to the transmission time frame.
The first principle and the second principle may be implemented together, for example, to help further reduce local data storage requirements. Thus, this second aspect can be considered as a minimum internal memory for fast UL transmissions and retransmissions.
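The reduced TXWIN and RETXWIN buffers described above can be sketched as two small bounded queues. The class name, buffer sizes, and eviction behavior below are illustrative assumptions, not the patented implementation:

```python
from collections import deque

class L2InternalMemory:
    """Sketch of reduced TXWIN/RETXWIN buffers in minimal internal memory."""

    def __init__(self, txwin_size=4, retxwin_size=4):
        # Bounded queues: with maxlen set, appending to a full deque
        # silently evicts the oldest entry, modeling a small window
        # that overwrites aged-out data.
        self.txwin = deque(maxlen=txwin_size)      # PDUs staged for transmission
        self.retxwin = deque(maxlen=retxwin_size)  # recently sent, kept for fast HARQ retx

    def stage(self, pdu):
        """Place a prepared MAC PDU into the transmission window."""
        self.txwin.append(pdu)

    def transmit_next(self):
        """Send the oldest staged PDU and retain a copy in RETXWIN."""
        pdu = self.txwin.popleft()
        self.retxwin.append(pdu)
        return pdu

mem = L2InternalMemory(txwin_size=2, retxwin_size=2)
mem.stage("PDU-0")
mem.stage("PDU-1")
sent = mem.transmit_next()
```

Keeping both windows small in this way bounds the internal memory footprint while still allowing most HARQ retransmissions to be served from fast local storage.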
A third aspect of some embodiments may relate to efficient management of storage of retransmission data. This third aspect may involve three principles, which may be used independently or together.
According to the first principle, HARQ retransmission data (if any) can be retrieved from a small, fast internal memory. One detail may be the length of time the HARQ retransmission data remains in that memory. The length of time may be configured in advance or may dynamically change over time based on the HARQ usage of the device. For example, a device in a relatively noisy or otherwise interference-prone scenario may need to use HARQ more often than one in a relatively clear scenario.
According to the second principle, if retransmission data is requested or otherwise required but is not currently available in the internal memory, the retransmission data may be retrieved from the external memory. Such misses should be rare, because the retention time in the internal memory may be long enough to handle the vast majority of HARQ retransmissions; however, retransmission requests may sometimes arrive outside the hold time. As described above, the hold time for the internal memory may be configured to capture a certain predicted percentage of retransmission requests, such as 97%, 99%, or 99.9% of retransmission requests. Other percentages may also be targeted; the foregoing are merely examples.
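The relationship between the hold time and the targeted percentage of retransmission requests can be illustrated with a simple percentile calculation. The function name and the sample delay values are hypothetical; the source does not specify how the hold time is derived:

```python
import math

def hold_time_for_target(retx_delays_ms, target=0.99):
    """Pick an internal-memory hold time (ms) long enough to cover
    `target` fraction of observed HARQ retransmission delays."""
    ordered = sorted(retx_delays_ms)
    # Index of the smallest delay value that covers the target fraction.
    idx = max(0, math.ceil(target * len(ordered)) - 1)
    return ordered[idx]

delays_ms = [1, 2, 2, 3, 3, 4, 4, 5, 8, 20]  # illustrative samples
hold_90 = hold_time_for_target(delays_ms, target=0.90)
hold_99 = hold_time_for_target(delays_ms, target=0.99)
```

A device could periodically recompute this from recent HARQ statistics, which would realize the dynamic adjustment described above.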
According to the third principle, all L3 data packets may be stored in the external memory until a predetermined time expires. The predetermined time may be an L2 discard window or a PDCP discard window. If multiple discard windows are available, the external memory may wait until the last discard window expires. The window may be based on the need to perform link recovery; therefore, the discard window may expire when the RLC layer or the PDCP layer has completed link recovery.
Fig. 1 illustrates data processing in a protocol stack according to some embodiments. For example, the protocol stack may be implemented in a modem or similar device. As shown in fig. 1, in a 5G cellular radio modem, the packet data protocol stack is composed of a modem L3 IP layer, a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, and a Medium Access Control (MAC) layer. Each layer is responsible for handling user-plane packet data in the form of IP data or raw user data and for ensuring secure, on-time, and error-free data transmission.
In the UL end-to-end (ETE) data path shown in fig. 1, L3 data goes through multiple layers of processing before being finally transmitted to the MAC layer and the PHY layer.
First, for example, packets may be processed through L3 Internet Protocol (IP) header and quality of service (QoS) flow handling, and may be queued in an L3 buffer. The packets may then be processed through PDCP, which may include ROHC, integrity checking, and ciphering. PDCP packet data may be queued in an L2 buffer ordered by Logical Channel (LC). Then, at the RLC layer, the RLC queues may be ordered into priority bins according to data type (retransmission, new data, status, segment). Finally, at the MAC layer, data packets from different LCs may be collected according to the priorities of the Logical Channel Prioritization (LCP) procedure specified in the 3GPP standards. Similar methods may be used for other communication standards.
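The MAC-layer collection step can be sketched as follows. This simplified version serves logical channels strictly in priority order until the grant is exhausted; the full 3GPP LCP procedure additionally enforces prioritized bit rates (the Bj token-bucket bookkeeping), which is omitted here:

```python
def build_mac_pdu(grant_bytes, logical_channels):
    """Collect data from logical channels in priority order
    (lower value = higher priority) up to the uplink grant size.
    Simplified LCP sketch; channel fields are illustrative."""
    pdu = []
    remaining = grant_bytes
    for lc in sorted(logical_channels, key=lambda c: c["priority"]):
        take = min(remaining, lc["pending_bytes"])
        if take > 0:
            pdu.append((lc["id"], take))
            remaining -= take
        if remaining == 0:
            break
    return pdu

lcs = [
    {"id": "LC2", "priority": 2, "pending_bytes": 300},
    {"id": "LC1", "priority": 1, "pending_bytes": 100},
]
pdu = build_mac_pdu(grant_bytes=250, logical_channels=lcs)
```

Here the higher-priority LC1 is fully served first, and LC2 receives only the remaining 150 bytes of the grant.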
Fig. 2 shows a data flow diagram illustrating some embodiments of the present disclosure. Fig. 3A and 3B illustrate internal memory corresponding to the data flow diagram of fig. 2 in some embodiments. As shown in fig. 2, at 210A, an application processor (AP) or host may send L3 TCP/IP packets to the modem data stack of system 200. A data buffer is allocated from external memory to store the incoming IP packets. These packets may broadly be part of the L3 data window.
At 220B, the IP header may be processed and moved to L2 internal memory. Since the IP header may need to be efficiently processed for QoS flow identification and ordering/filtering, the IP header may be placed in fast internal memory first, i.e., before the rest of the packet. Although not specifically shown in fig. 3A, these IP headers may be part of an IP packet (e.g., the current transport packet 310 or any other packet in TXWIN 320) to which the remainder of the IP packet is added after 230C.
In fig. 2, an external memory such as an L3 buffer (external) 202 is shown operably coupled to Digital Processing (DP) hardware 204, which in turn is shown operably coupled to an internal memory such as an L2+ HARQ buffer (local/internal) 206. The L2+ HARQ buffer (local/internal) 206 is shown operatively connected to the physical layer (PHY) 208. The physical layer (PHY)208 may be considered external to the DP hardware 204, but is part of the overall system 200. The DP software 212 may run on a microcontroller (e.g., MCU 510 in fig. 5) or another computing device.
As shown in fig. 2, at 230C, when L2 internal memory is available and the MAC is ready to prepare data for the next transmission window, the MAC may trigger the allocation of an L2 data buffer from the small capacity internal memory and may fetch the data from the L3 external memory.
This data retrieved from the external memory of L3 may be processed through PDCP, which may include ROHC, integrity check and ciphering, and the addition of both RLC and MAC headers.
Data prepared in the L2 internal memory may be placed in contiguous storage for fast streaming to the PHY layer at transmission time.
PDCP processing, MAC PDU preparation, and prioritized placement into contiguous storage can all be performed while data is moved from the L3 external memory to the L2 internal memory. Because the data is moved only once, the movement can be optimized, or otherwise efficiently or beneficially arranged, for the next transmission window. As shown in fig. 3A, this movement into internal memory may occur at T0 to fill the group in TXWIN 320, including the currently transmitted packet 310. Thus, the current transport packet 310 may be loaded into a transmission window (TXWIN) 320 in L2 internal memory (which may be referred to as L2 Localmem). Meanwhile, the L3 data window 330, also referred to as the L3 data buffer 330, may include the same packets, or even more packets. The L3 data window 330 may be held in the L3 buffer in external memory (e.g., in the L3 buffer (external) 202 in fig. 2 or in the external memory 506 in fig. 5). Thus, the L3 data buffer 330 may include all of the packets of TXWIN 320 and RETXWIN 340, or even more packets.
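The single-pass move, in which L2 processing is applied while packets are copied into contiguous storage, might look like the sketch below. The two-byte mock header stands in for the real ROHC, integrity, ciphering, and RLC/MAC header-addition steps, which the source does not specify at this level of detail:

```python
def stage_contiguous(l3_packets, process):
    """Move packets from an L3 buffer into one contiguous L2 staging
    area, applying L2 processing in the same pass. Returns the staged
    bytes plus (offset, length) records for fast streaming to the PHY."""
    staging = bytearray()
    offsets = []
    for pkt in l3_packets:
        pdu = process(pkt)                       # PDCP + header addition stand-in
        offsets.append((len(staging), len(pdu))) # where this PDU landed
        staging.extend(pdu)                      # back-to-back, no gaps
    return bytes(staging), offsets

# Mock "processing": prefix a 2-byte placeholder RLC/MAC header.
prepare = lambda pkt: b"\xaa\xbb" + pkt
blob, offsets = stage_contiguous([b"hello", b"world"], prepare)
```

Because the staged PDUs are back-to-back, the PHY-facing logic can stream a single address range instead of chasing scattered buffers.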
As shown in fig. 2, at 240D, RRC signaling messages or RLC command messages may arrive; these may reach the L2 transmit queue at the same time. Such messages may be distributed directly into the L2 data buffer. Although not explicitly shown in fig. 3A, these messages may be included in TXWIN 320.
At 250E, MAC PDU transmission and/or retransmission may occur. At each slot, the MAC can obtain an indication and grant from the base station (BS) to transmit the packet. This grant is shown in fig. 1 as a NW UL grant as an example. For new data, packets may be quickly retrieved from the TXWIN 320 buffer in the L2 internal memory, where the MAC data is already prepared.
As shown at T0 in fig. 3A, for retransmission data, the RETXWIN 340 buffer may first be scanned to retrieve hybrid automatic repeat request (HARQ) data, such as unacknowledged packet 350. If the data is outside the RETXWIN 340 window and/or has been overwritten or deleted (e.g., due to the limited size of RETXWIN 340), the L3 data may be accessed again from external memory. In this case, the retrieved data may be processed through the L3-to-L2 data path, where new L2 local buffers may be allocated for these packets. For example, a packet previously sent at T0 and found only in the L3 data window (as shown at 375) may be added back into RETXWIN 340 at 370, as shown at 360 at T1 in fig. 3B. Previously transmitted packets still within RETXWIN 340 may be aggregated by, for example, moving to the left, as shown at 365.
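The RETXWIN-first lookup with fallback to external memory can be sketched as follows. The dictionary-based buffers and the `rebuild` callback are illustrative stand-ins for the RETXWIN scan and the L3-to-L2 data path described above:

```python
def fetch_for_harq_retx(seq, retxwin, l3_external, rebuild):
    """Serve a HARQ retransmission from fast RETXWIN if present;
    otherwise rebuild the PDU from L3 external memory via the
    normal L3-to-L2 path and re-populate RETXWIN."""
    if seq in retxwin:
        return retxwin[seq], "internal"
    pdu = rebuild(l3_external[seq])  # stand-in for PDCP + header addition
    retxwin[seq] = pdu               # allocate a new L2 local buffer for it
    return pdu, "external"

retxwin = {7: b"pdu-raw7"}               # packet 7 still within RETXWIN
l3_external = {6: b"raw6", 7: b"raw7"}   # L3 data window holds both
rebuild = lambda raw: b"pdu-" + raw
hit, hit_src = fetch_for_harq_retx(7, retxwin, l3_external, rebuild)
miss, miss_src = fetch_for_harq_retx(6, retxwin, l3_external, rebuild)
```

The fast path avoids external memory entirely; only aged-out packets (here, packet 6) pay the cost of the full rebuild.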
As shown in fig. 2, at 260F, the L2 local buffers may be refilled. Once data has been fetched from L2 internal memory for transmission to the PHY layer, these buffers may be saved into the RETXWIN region of internal memory. This may be accomplished by moving bits from one region of internal memory to another. Another approach is to redefine a region of internal memory that was previously part of the TXWIN region as a region of RETXWIN.
The old data may then be overwritten or deleted, freeing up space for incoming TXWIN and RETXWIN. Deletion herein may also include dereferencing bits without zeroing or otherwise altering the bits.
As described above, after PDCP processing, header addition, and prioritized MAC PDU creation, additional L3 data may be pulled into L2 internal memory. This is shown in fig. 3B: at T1, the transmission window and retransmission window have moved forward by one packet to the right, as indicated by the window movement direction arrow. This one-packet adjustment is for illustration only; if multiple packets are sent simultaneously, the windows can advance by multiple packets at once. Also, although the directional arrows point to the right, this merely illustrates a memory in which consecutive memory blocks are arranged in left-to-right order. Other memory arrangements are also permissible; the figure is for illustrative and example purposes only.
Fig. 4A illustrates a method according to some embodiments. As shown in fig. 4A, a method 400 for memory processing may include, at 410, maintaining, by circuitry, layer three (L3) data according to at least one first window. The L3 data may be stored in an external memory. The method 400 may also include, at 420, maintaining, by the circuitry, layer two (L2) data according to at least one second window that is shorter than the first window. The L2 data may be stored in an internal memory. An illustration of this approach can be seen in figs. 3A and 3B, where the L3 data window is much larger than the TXWIN and RETXWIN windows for the L2 data.
The at least one second window may include a transmission window and a retransmission window, such as TXWIN 320 and RETXWIN 340 in fig. 3A and 3B. As shown by way of example in fig. 3A and 3B, the combination of the transmission window and the retransmission window may still be smaller than the at least one first window, e.g. the L3 data window.
As shown in fig. 4A, method 400 may further include determining a capacity of the internal memory for a plurality of media access control instances at 430. This determination of capacity may occur in conjunction with the previously described maintenance steps as shown, or may be performed separately from these steps. The determined capacity may take into account a number of parameters. For example, the plurality of parameters may include a number of logical channels, a data rate, a priority of the logical channels, a maximum bucket size of the logical channels, and a layer three buffer size of the logical channels.
In some embodiments, method 400 may further include scaling the size of each media access control instance based on a ratio of the maximum internal memory capacity to the total size of all media access control instances, at 440. For example, based on an initial calculation of the demand for each MAC instance, a situation may arise in which the total demand of the instances exceeds the maximum available capacity of the internal memory. Using a weighted-fairness approach, each MAC instance can then be allocated its own demand scaled by the ratio between the maximum available internal memory and that total demand. This allows an otherwise limited internal memory to accommodate all instances.
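The weighted-fairness scaling can be illustrated as follows. The instance names, demand figures, and kilobyte units are hypothetical; the source only specifies the ratio-based scaling itself:

```python
def allocate_internal_memory(demands_kb, capacity_kb):
    """Weighted-fairness allocation: if the total per-MAC-instance
    demand exceeds the internal memory capacity, scale every demand
    by capacity / total so the shares fit while keeping proportions."""
    total = sum(demands_kb.values())
    if total <= capacity_kb:
        return dict(demands_kb)  # everything fits; no scaling needed
    scale = capacity_kb / total
    return {inst: int(d * scale) for inst, d in demands_kb.items()}

# Two MAC instances demanding 800 KB total against a 600 KB capacity.
alloc = allocate_internal_memory({"mac0": 500, "mac1": 300}, capacity_kb=600)
```

Each instance keeps the same share of the pool that it had of the total demand (here a scale factor of 600/800 = 0.75).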
The method of FIG. 4A may be performed with the architecture shown in FIG. 2 and the specific hardware shown in FIG. 5, and discussed in more detail below. For example, a microcontroller and/or Application Specific Integrated Circuit (ASIC) may be responsible for maintenance, capacity determination, and scaling as described above.
Fig. 4B illustrates another method according to some embodiments. As with fig. 4A, the method of fig. 4B may be implemented in circuitry, such as the hardware and associated software shown in figs. 2 and 5. The method of fig. 4B may be used together with the method of fig. 4A, such that both methods are implemented in concert in the same modem of the same user equipment. Other implementations are also possible; for example, the two methods may be practiced separately from each other.
As shown in fig. 4B, a method 405 for memory processing may include processing, by circuitry, a header of a packet at 415 and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data. This is similarly shown at 220B as previously described.
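The split-path handling at 415 can be illustrated with a small sketch; the list-based stand-ins for the internal and external memories, and the function name, are assumptions for illustration.

```python
def stage_header(packet: bytes, header_len: int, internal_l2, external_l3):
    """Move only the packet header into internal (L2) memory.

    The remainder stays behind in external (L3) memory until the
    transmit-time conditions are met.
    """
    header, remainder = packet[:header_len], packet[header_len:]
    internal_l2.append(header)      # header processed into L2 memory
    external_l3.append(remainder)   # remainder kept in external memory
    return header, remainder
```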
As shown in fig. 4B, the method 405 may further include processing, by the circuitry, the remainder of the packet upon determining that at least two predetermined conditions are satisfied at 425. This is shown at 230B and 240D in fig. 2 as described above. The remainder of the packet may be all but the packet header processed separately at 220B and 415. The determination of whether the predetermined conditions are satisfied, at 427, may be accomplished in various ways. In some embodiments, the at least two predetermined conditions may include that space in the internal memory is available and that the media access control is ready to prepare data for the next transmission window. This may be considered a just-in-time preparation technique, in which the remainder of the packet is provided to L2 memory only in time for transmission, thereby minimizing the time that the remainder of the packet resides in L2, and thus also minimizing the capacity requirements on the L2 memory.
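The two predetermined conditions can be checked with a simple gate; the sketch below is one assumed formulation (all names are illustrative) of the determination described above.

```python
def remainder_may_be_processed(free_internal_bytes, remainder_bytes,
                               mac_ready_for_next_window):
    """Return True only when both predetermined conditions hold:
    (1) internal memory has room for the packet remainder, and
    (2) the MAC is ready to prepare data for the next window."""
    return (free_internal_bytes >= remainder_bytes
            and mac_ready_for_next_window)
```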
The processing of the remainder of the packet may include packet data convergence protocol processing, including robust header compression, integrity checking, and encryption, as shown in fig. 2 and discussed above. The remainder of the packet may be further processed by adding a radio link control header and a medium access control header. The remainder of the packet may be placed in contiguous memory in the internal memory, as shown in fig. 3A and 3B. A contiguous region of storage may refer to a physical or logical arrangement of bits in memory. For example, a logical arrangement may be the physical addresses or the order in which a controller of the memory accesses bits. When a contiguous memory area is used, the system can extract a single range of bits, rather than having to follow a large number of separate addresses for bits spread throughout the memory.
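Contiguous placement can be modeled as a bump allocator over one byte region; the class below is a minimal sketch (the names and the (offset, length) return convention are assumptions), showing why a single address range suffices to retrieve a stored item.

```python
class ContiguousRegion:
    """Minimal bump allocator modeling a contiguous area of internal
    memory: each stored item occupies one (offset, length) range."""

    def __init__(self, size):
        self.mem = bytearray(size)
        self.next_free = 0

    def place(self, data: bytes):
        start = self.next_free
        end = start + len(data)
        if end > len(self.mem):
            raise MemoryError("contiguous region exhausted")
        self.mem[start:end] = data
        self.next_free = end
        return start, len(data)  # one range describes the whole item
```

Placing two items back to back leaves them readable as one span of addresses, rather than scattered through the memory.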
As shown in fig. 4B, the method 405 may further include transferring, by the circuitry, the remaining portion of the packet from the external memory to the internal memory at 432. As noted above, this is also shown at 230C in fig. 2.
As shown in fig. 4B, the method 405 may further include receiving a packet at 402 and storing the packet in an external memory before processing the header. This is further illustrated at 210A in fig. 2.
As shown in fig. 4B, the method 405 may also include passing the packet to a physical layer of an implementing device for transmission. As noted above, this is also shown at 250E in fig. 2.
The internal memory used in the method 405 may include a transmission window buffer and a retransmission window buffer, such as TXWIN 320 and RETXWIN 340 shown in fig. 3A and 3B. The method 405 may further include, as the packet is delivered from the transmission window buffer to the physical layer at 435, moving the packet to the retransmission window buffer at 437. This movement is also illustrated by the change in window range between T0 and T1 in fig. 3A and 3B.
Upon passing the packet from the transmission window buffer to the physical layer, at 404, the method 405 may further include feeding additional layer three data from the external memory to the internal memory. Method 405 may then continue as described above starting at 415.
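Steps 435, 437, and 404 together form a cycle: deliver a packet from the transmission window, retain it in the retransmission window, and pull additional layer three data in. A sketch of that cycle, with deques standing in for the buffers and the external memory (all names assumed), might look like:

```python
from collections import deque

class WindowBuffers:
    """Sketch of the TX/RETX window handoff: when a packet is passed
    to the PHY, it moves from the transmission window to the
    retransmission window, freeing room to pull more L3 data in."""

    def __init__(self, l3_packets, tx_capacity):
        self.l3 = deque(l3_packets)   # external-memory stand-in
        self.txwin = deque()
        self.retxwin = deque()
        self.tx_capacity = tx_capacity
        self.refill()

    def refill(self):
        # Feed additional layer three data into the transmission window.
        while self.l3 and len(self.txwin) < self.tx_capacity:
            self.txwin.append(self.l3.popleft())

    def deliver_to_phy(self):
        pkt = self.txwin.popleft()
        self.retxwin.append(pkt)      # kept until acknowledged
        self.refill()                 # step 404: pull in more L3 data
        return pkt
```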
Fig. 5 illustrates a detailed block diagram of a baseband SoC 502 implementing layer 2 packet processing using layer 2 circuitry 508 and a Microcontroller (MCU) 510, according to some embodiments of the present disclosure. Fig. 5 may be viewed as a specific implementation and example of the architecture shown in fig. 2, but other implementations, whether more or less hardware-dependent, are also permitted.
As shown in fig. 5, the baseband SoC 502 may be an example of a software and hardware interworking system, where software functions are implemented by the MCU 510 and hardware functions are implemented by the layer 2 circuit 508. MCU 510 may be one example of a microcontroller, and layer 2 circuit 508 may be one example of an integrated circuit, although other microcontrollers and integrated circuits are also permitted. In some embodiments, the layer 2 circuitry 508 includes SDAP circuitry 520, PDCP circuitry 522, RLC circuitry 524, and MAC circuitry 526. Application-specific integrated circuits (ICs) controlled by MCU 510, such as SDAP circuit 520, PDCP circuit 522, RLC circuit 524, and MAC circuit 526, may be used for layer 2 packet processing. In some embodiments, the SDAP circuit 520, the PDCP circuit 522, the RLC circuit 524, and the MAC circuit 526 are each an IC dedicated to performing the functions of the respective layer in the layer 2 user plane and/or control plane. For example, the SDAP circuit 520, the PDCP circuit 522, the RLC circuit 524, and the MAC circuit 526 can each be an ASIC customized for a particular use, rather than intended for general-purpose use. Some ASICs may have high speed, small chip size, and low power consumption compared to general-purpose processors.
As shown in fig. 5, the baseband SoC 502 may be operably coupled to the host processor 504 and the external memory 506 through the main bus 538. For uplink communications, a host processor 504, such as an Application Processor (AP), may generate raw data that has not been encoded and modulated by the PHY layer of the baseband SoC 502. Similarly, for downlink communications, host processor 504 may receive the data after it is initially decoded and demodulated by the PHY layer and then processed by layer 2 circuitry 508. In some embodiments, the raw data is formatted into data packets according to any suitable protocol, such as Internet Protocol (IP) data packets. External memory 506 may be shared by host processor 504 and baseband SoC 502 or any other suitable component.
In some embodiments, the external memory 506 stores raw data (e.g., IP data packets) to be processed by the layer 2 circuitry 508 of the baseband SoC 502, as well as data already processed by the layer 2 circuitry 508 (e.g., MAC PDUs) to be accessed by layer 1 (e.g., the PHY layer). In a downlink stream received by the user equipment, the situation may be reversed: the external memory 506 may store data received from the PHY layer and data output from the layer 2 circuitry 508 after header removal and other tasks. The external memory 506 may optionally not store intermediate data of the layer 2 circuitry 508, e.g., PDCP PDUs/RLC SDUs or RLC PDUs/MAC SDUs. For example, the layer 2 circuitry 508 may instead modify data stored in the external memory 506.
As shown in fig. 5, the baseband SoC 502 may also include Direct Memory Access (DMA) 516, which may allow some of the layer 2 circuits 508 to access the external memory 506 directly, independent of the host processor 504. The DMA 516 may include a DMA controller and any other suitable input/output (I/O) circuitry. As shown in fig. 5, the baseband SoC 502 may further include an internal memory 514, e.g., an on-chip memory on the baseband SoC 502; the internal memory 514 is distinct from the external memory 506, which is an off-chip memory not on the baseband SoC 502. In some embodiments, internal memory 514 includes one or more L1, L2, L3, or L4 caches. The layer 2 circuitry 508 may also access the internal memory 514 through the main bus 538. Thus, internal memory 514 may be specific to the baseband SoC 502, as opposed to other components or subcomponents of the implementing system.
As shown in fig. 5, the baseband SoC 502 may also include a memory 512 that is shared by the layer 2 circuitry 508 and the MCU 510 (e.g., accessible by both the layer 2 circuitry 508 and the MCU 510). It should be appreciated that although the memory 512 is illustrated as separate from the internal memory 514, in some examples the memory 512 and the internal memory 514 may be logical partitions of the same physical memory structure (e.g., Static Random Access Memory (SRAM)). In one example, a logical partition in internal memory 514 may be dedicated or dynamically allocated to the layer 2 circuitry 508 and the MCU 510 for exchanging commands and responses. In some embodiments, memory 512 includes a plurality of command queues 534 for respectively storing multiple sets of commands and a plurality of response queues 536 for respectively storing multiple sets of responses. Each pair of corresponding command queue 534 and response queue 536 may be dedicated to one of the plurality of layer 2 circuits 508.
As shown in fig. 5, baseband SoC 502 may also include a local bus 540. In some embodiments, MCU 510 may be operably coupled to memory 512 and main bus 538 through local bus 540. MCU 510 may be configured to generate multiple sets of control commands and write each set of commands to a respective command queue 534 in memory 512 via local bus 540 and interrupts. MCU 510 may also read sets of responses (e.g., processing result status) from multiple response queues 536 in memory 512 via local bus 540 and interrupts, respectively. In some embodiments, MCU 510 generates a set of commands based on a set of responses from a higher layer in a layer 2 protocol stack (e.g., a previous stage in layer 2 uplink data processing) or a lower layer in a layer 2 protocol stack (e.g., a previous stage in layer 2 downlink data processing). MCU 510 is operatively coupled to layer 2 circuitry 508 and controls the operation of layer 2 circuitry 508 to process layer 2 data through control commands in command queue 534 in memory 512. It should be understood that although one MCU 510 is shown in fig. 5, the number of MCUs is scalable such that multiple MCUs may be used in some examples. It should also be understood that, in some embodiments, memory 512 may be part of MCU 510, e.g., a cache integrated with MCU 510. It is to be further understood that, regardless of nomenclature, any suitable processing unit that can generate control commands to control the operation of the layer 2 circuitry 508 and check the response of the layer 2 circuitry 508 can be considered to be the MCU 510 disclosed herein.
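The command/response exchange between MCU 510 and a layer 2 circuit over a dedicated queue pair can be sketched as follows; the handler callback and all function names are assumptions for illustration.

```python
from collections import deque

class QueuePair:
    """Hypothetical model of one command/response queue pair in shared
    memory, dedicated to a single layer 2 circuit."""

    def __init__(self):
        self.commands = deque()
        self.responses = deque()

def mcu_issue(pair, command):
    # MCU writes a control command to the command queue (via local bus).
    pair.commands.append(command)

def circuit_step(pair, handler):
    # The layer 2 circuit consumes one command and posts its
    # processing result status to the response queue.
    command = pair.commands.popleft()
    pair.responses.append(handler(command))
```

The MCU would then read the response queue and use the result status to generate commands for the next stage of layer 2 processing.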
The hardware and software interworking systems disclosed herein, such as system 200 in fig. 2 and baseband SoC 502 in fig. 5, may be implemented by any suitable node in a wireless network. For example, fig. 6 illustrates an example wireless network 600 in which some aspects of the disclosure may be implemented, according to some embodiments of the disclosure.
As shown in fig. 6, wireless network 600 may include a network of nodes, such as User Equipment (UE) 602, access node 604, and core network element 606. The user device 602 may be any terminal device, such as a mobile phone, desktop computer, laptop, tablet, in-vehicle computer, game console, printer, positioning device, wearable electronic device, smart sensor, or any other device capable of receiving, processing, and sending information, such as any member of a vehicle-to-everything (V2X) network, a cluster network, a smart grid node, or an internet of things (IoT) node. It should be understood that the user device 602 is shown as a mobile telephone by way of illustration only and not by way of limitation.
The access node 604 may be a device that communicates with the user equipment 602, such as a wireless access point, a Base Station (BS), a node B, an enhanced node B (eNodeB or eNB), a next generation node B (gNodeB or gNB), a cluster master node, and so on. The access node 604 may have a wired connection to the user device 602, a wireless connection to the user device 602, or any combination thereof. The access node 604 may be connected to the user equipment 602 through multiple connections, and the user equipment 602 may be connected to other access nodes in addition to the access node 604. The access node 604 may also be connected to other UEs. It should be understood that the access node 604 is shown by way of illustration, and not by way of limitation, by a radio tower.
The core network element 606 may serve the access node 604 and the user equipment 602 to provide core network services. Examples of the core network element 606 may include a Home Subscriber Server (HSS), a Mobility Management Entity (MME), a Serving Gateway (SGW), or a packet data network gateway (PGW). These are examples of core network elements of an Evolved Packet Core (EPC) system, which is the core network of an LTE system. Other core network elements may be used in LTE and other communication systems. In some embodiments, the core network element 606 comprises an access and mobility management function (AMF) device, a Session Management Function (SMF) device, or a User Plane Function (UPF) device for a core network of the NR system. It should be understood that the core network element 606 is shown illustratively, but not restrictively, as a collection of rack-mounted servers.
The core network element 606 may be connected to a large network, such as the internet 608, or another IP network to transport packet data over any distance. In this manner, data from the user device 602 may be communicated to other UEs connected to other access points, including, for example, a computer 610 connected to the internet 608 using a wired or wireless connection or a tablet 612 wirelessly connected to the internet 608 via a router 614. Thus, computer 610 and tablet 612 provide further examples of possible UEs, while router 614 provides an example of another possible access node.
A general example of a rack-mounted server is provided for illustration as a core network element 606. However, there may be multiple elements in the core network, including a database server (e.g., database 616) and a security and authentication server (e.g., authentication server 618). For example, database 616 may manage data related to a user's subscription to a network service. A Home Location Register (HLR) is an example of a standardized database of subscriber information for a cellular network. Likewise, authentication server 618 can handle authentication of users, sessions, and the like. In NR systems, an authentication server function (AUSF) device may be a specific entity that performs user equipment authentication. In some embodiments, a single server chassis may handle multiple such functions, such that the connections between the core network elements 606, the authentication server 618, and the database 616 may be local connections within the single chassis.
Although the above description uses uplink and downlink processing of packets in user equipment as an example in various discussions, similar techniques may be used for the other direction of processing as well as for processing in other devices such as access nodes and core network nodes. For example, any device that processes packets through multiple layers of a protocol stack may benefit from some embodiments of the present disclosure, even if not specifically listed above or specifically shown in the example network of fig. 6.
Each element of fig. 6 may be considered a node of wireless network 600. In the following description of node 700 in fig. 7, more details are provided, by way of example, regarding possible implementations of the node. The node 700 may be configured as the user equipment 602, the access node 604, or the core network element 606 in fig. 6. Similarly, node 700 may also be configured as computer 610, router 614, tablet 612, database 616, or authentication server 618 in fig. 6.
As shown in fig. 7, node 700 may include a processor 702, a memory 704, and a transceiver 706. These components are shown connected to each other via a bus 708, although other connection types are also permissible. When the node 700 is a user device 602, further components may be included, such as User Interfaces (UIs), sensors, etc. Similarly, when node 700 is configured as core network element 606, node 700 may be implemented as a blade in a server system. Other implementations are possible.
The transceiver 706 may include any suitable device for transmitting and/or receiving data. Node 700 may include one or more transceivers, but only one transceiver 706 is shown for simplicity of illustration. Antenna 710 is shown as a possible communication mechanism for node 700. Multiple antennas and/or antenna arrays may be utilized. Further, examples of the node 700 may communicate using wired techniques instead of, or in addition to, wireless techniques. For example, the access node 604 may communicate with the user equipment 602 in a wireless manner and may communicate with the core network element 606 over a wired connection (e.g., over an optical or coaxial cable). Other communication hardware, such as a Network Interface Card (NIC), may also be included.
As shown in fig. 7, node 700 may include a processor 702. Although only one processor is shown, it will be understood that multiple processors may be included. The processor 702 may include a microprocessor, microcontroller, DSP, ASIC, Field Programmable Gate Array (FPGA), Programmable Logic Device (PLD), state machine, gated logic, discrete hardware circuitry, and other suitable hardware configured to perform the various functions described throughout this disclosure. The processor 702 may be a hardware device having one or more processing cores. The processor 702 may execute software. Software should be construed broadly to mean instructions, instruction sets, code segments, program code, programs, subprograms, software modules, applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Software may include computer instructions written in an interpreted language, a compiled language, or machine code. Other techniques for instructing hardware are also permitted within the broad scope of software. Processor 702 may be a baseband chip, such as DP hardware 204 in FIG. 2 or SoC 502 in FIG. 5. The node 700 may also comprise other processors not shown, such as a central processing unit of the device, a graphics processor, etc. Processor 702 may include internal memory (not shown in fig. 7) that may be used as memory for L2 data, such as L2+ HARQ buffer (local/internal) 206 in FIG. 2 or internal memory 514 in FIG. 5. The processor 702 may include, for example, an RF chip integrated with the baseband chip, or the RF chip may be provided separately. Processor 702 may be configured to operate as, or may be an element or component of, a modem of the node 700. Other arrangements and configurations are also permitted.
As shown in fig. 7, node 700 may also include a memory 704. Although only one memory is shown, it should be understood that multiple memories may be included. The memory 704 may broadly include both storage and memory. For example, memory 704 may include Random Access Memory (RAM), Read Only Memory (ROM), SRAM, dynamic RAM (DRAM), ferroelectric RAM (FRAM), electrically erasable programmable ROM (EEPROM), CD-ROM or other optical disk storage, a Hard Disk Drive (HDD) (e.g., a magnetic disk storage or other magnetic storage device), a flash memory drive, a Solid State Drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions that can be accessed and executed by processor 702. Broadly, the memory 704 can be embodied by any computer-readable medium, such as a non-transitory computer-readable medium. Memory 704 may be external memory 506 in FIG. 5 or L3 buffer (external) 202 in FIG. 2. Memory 704 may be shared by processor 702 and other components of node 700 (e.g., a graphics processor or central processing unit, not shown).
In various aspects of the disclosure, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as instructions or code on a non-transitory computer-readable medium. Computer-readable media include computer storage media. A storage medium may be any available medium that can be accessed by a computing device, such as node 700 in fig. 7. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, HDDs (e.g., magnetic disk storage or other magnetic storage devices), flash drives, SSDs, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a processing system (e.g., a mobile device or computer). Disk and disc, as used herein, include CD, laser disc, optical disc, DVD, and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
According to an aspect of the present disclosure, an apparatus for memory processing may include an external memory configured to store tier three (L3) data and an internal memory configured to store tier two (L2) data. The apparatus may also include circuitry configured to process a header of a packet and move the header from the external memory to the internal memory, process a remaining portion of the packet upon determining that at least two predetermined conditions are satisfied, and transfer the remaining portion of the packet from the external memory to the internal memory.
In some embodiments, the circuitry may be further configured to receive the packet and store the packet in the external memory prior to processing the header.
In some embodiments, the circuitry may be further configured to pass the packet to a physical layer of the apparatus for transmission.
In some embodiments, the internal memory may include a transmission window buffer and a retransmission window buffer.
In some embodiments, the circuitry may be further configured to move the packet to the retransmission window buffer when passing the packet from the transmission window buffer to the physical layer.
In some embodiments, the circuitry may be configured to load additional L3 data from the external memory into the internal memory when passing the packet from the transmission window buffer to the physical layer.
In some embodiments, the remaining portion of the packet may be processed by packet data convergence protocol processing including robust header compression, integrity checking, and encryption.
In some embodiments, this remaining portion of the packet may be further processed by adding radio link control and medium access control headers.
In some embodiments, the remaining portion of the packet may be placed in a contiguous memory area in the internal memory.
In some embodiments, the at least two predetermined conditions may include that space in the internal memory is available and that the medium access control is ready to prepare data for a next transmission window.
According to another aspect, an apparatus for memory processing may include an external memory configured to store layer three (L3) data and an internal memory configured to store layer two (L2) data. The apparatus may also include circuitry configured to maintain L3 data according to at least one first window and L2 data according to at least one second window that is shorter than the first window.
In some embodiments, the at least one second window may comprise a transmission window and a retransmission window. The combination of the transmission window and the retransmission window may be smaller than the at least one first window.
In some embodiments, the circuitry may be further configured to determine a capacity of the internal memory for a plurality of media access control instances.
In some embodiments, the circuitry may be configured to take into account a plurality of parameters when determining the capacity of the internal memory.
In some embodiments, the parameters may include the number of logical channels, the data rate, the priority of the logical channels, the maximum bucket size of the logical channels, and the layer three buffer size of the logical channels.
In some embodiments, the circuitry may be configured to scale the size of each media access control instance based on a ratio of a maximum internal memory capacity and a total size of all media access control instances.
According to another aspect, a method for memory processing may include processing, by circuitry, a header of a packet and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data. The method may also include processing, by the circuitry, a remaining portion of the packet upon determining that at least two predetermined conditions are satisfied. The method may also include transferring, by the circuitry, the remaining portion of the packet from the external memory to the internal memory.
In some embodiments, the method may further include receiving the packet and storing the packet in the external memory prior to processing the header.
In some embodiments, the method may further include passing the packet to a physical layer of the device for transmission.
In some embodiments, the internal memory may include a transmission window buffer and a retransmission window buffer.
In some embodiments, the method may further include moving the packet to the retransmission window buffer while delivering the packet from the transmission window buffer to the physical layer.
In some embodiments, the method may further include, while passing the packet from the transmission window buffer to the physical layer, feeding additional layer three data from the external memory into the internal memory.
In some embodiments, the processing of the remaining portion of the packet may include packet data convergence protocol processing including robust header compression, integrity checking, and encryption.
In some embodiments, the remaining portion of the packet may be further processed by adding radio link control and medium access control headers.
In some embodiments, the remaining portion of the packet may be placed in a contiguous memory area in the internal memory.
In some embodiments, the at least two predetermined conditions may include that space in the internal memory is available and that the medium access control is ready to prepare data for a next transmission window.
According to yet another aspect, a method for memory processing may include maintaining, by circuitry, layer three (L3) data according to at least one first window, wherein the L3 data is stored in an external memory. The method may also include maintaining, by the circuitry, layer two (L2) data according to at least one second window that is shorter than the first window, wherein the L2 data is stored in the internal memory.
In some embodiments, the at least one second window may comprise a transmission window and a retransmission window. The transmission window combined with the retransmission window may be smaller than the at least one first window.
In some embodiments, the method may further include determining a capacity of the internal memory for a plurality of media access control instances.
In some embodiments, the determined capacity may take into account a plurality of parameters.
In some embodiments, the parameters may include the number of logical channels, the data rate, the priority of the logical channels, the maximum bucket size of the logical channels, and the layer three buffer size of the logical channels.
In some embodiments, the method may further include scaling each media access control instance size based on a ratio of the maximum internal memory capacity and a total size of all media access control instances.
According to another aspect, a non-transitory computer-readable medium may encode instructions that, when executed by a microcontroller of a node, may perform a process for memory processing. The process may include any of the methods described above.
The foregoing description of the specific embodiments will so reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. Boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The Summary and Abstract sections may set forth one or more, but not all, exemplary embodiments of the present disclosure as contemplated by the inventors and are, therefore, not intended to limit the present disclosure and the appended claims in any way.
Various functional blocks, modules, and steps have been disclosed above. The particular arrangements provided are illustrative rather than limiting. Accordingly, the functional blocks, modules, and steps may be reordered or combined in a manner different from the examples provided above. Similarly, some embodiments include only a subset of the functional blocks, modules, and steps, and allow for any such subset.
The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (20)

1. An apparatus for memory processing, comprising:
an external memory configured to store layer three (L3) data;
an internal memory configured to store layer two (L2) data; and
circuitry operably coupled to the external memory and the internal memory and configured to:
processing a header of a packet and moving the header from the external memory to the internal memory;
processing a remaining portion of the packet upon determining that space in the internal memory is available and a Media Access Control (MAC) layer is ready to prepare data for a next transmission window; and
transferring the remaining portion of the packet from the external memory to the internal memory.
2. The apparatus of claim 1, wherein the circuitry is further configured to: receiving the packet and storing the packet in the external memory before processing the header.
3. The apparatus of claim 1, further comprising:
a physical layer operably coupled to the circuitry, wherein the circuitry is further configured to pass the packet to the physical layer for transmission.
4. The apparatus of claim 1, wherein the internal memory comprises a transmission window buffer and a retransmission window buffer.
5. The apparatus of claim 4, wherein in communicating the packet from the transmission window buffer to the physical layer, the circuitry is further configured to move the packet to the retransmission window buffer.
6. The apparatus of claim 4, wherein in passing the packet from the transmission window buffer to the physical layer, the circuitry is further configured to load additional L3 data from the external memory into the internal memory.
7. The apparatus of claim 1, wherein to process the remaining portion of the packet, the circuitry is configured to apply Packet Data Convergence Protocol (PDCP) processing comprising robust header compression, integrity checking, and encryption.
8. The apparatus of claim 7, wherein to process the remaining portion of the packet, the circuitry is further configured to: adding a Radio Link Control (RLC) header and a MAC header to the remaining portion of the packet before passing the remaining portion of the packet to the internal memory.
9. The apparatus of claim 1, wherein to process the remaining portion of the packet, the circuitry is configured to place the remaining portion of the packet in a contiguous memory area in the internal memory.
10. The apparatus of claim 1, wherein the internal memory is configured to be accessed only by a baseband chip of the apparatus, and the external memory is configured to be accessed by a plurality of components of the apparatus in addition to the baseband chip.
11. The apparatus of claim 10, wherein the baseband chip comprises the circuitry.
12. The apparatus of claim 1, wherein the circuitry is further configured to:
maintaining the L3 data according to at least one first window comprising a first plurality of packets; and
maintaining the L2 data according to at least one second window that is shorter than the first window, wherein the second window includes a second plurality of packets that is smaller than the first plurality of packets.
13. An apparatus for memory processing, comprising:
an external memory configured to store layer three (L3) data;
an internal memory configured to store layer two (L2) data; and
circuitry operably coupled to the external memory and the internal memory and configured to:
maintaining the L3 data according to at least one first window comprising a first plurality of packets; and
maintaining the L2 data according to at least one second window that is shorter than the first window, wherein the second window includes a second plurality of packets that is smaller than the first plurality of packets.
14. The apparatus of claim 13, wherein the at least one second window comprises a transmission window and a retransmission window, wherein a combination of the transmission window and the retransmission window is smaller than the at least one first window.
15. The apparatus of claim 13, wherein the circuitry is further configured to determine the capacity of the internal memory for a plurality of Media Access Control (MAC) instances based on a plurality of parameters.
16. The apparatus of claim 15, wherein the plurality of parameters comprises at least one of a number of logical channels, a data rate, a priority of a logical channel, a maximum bucket size of a logical channel, or an L3 buffer size of a logical channel.
17. The apparatus of claim 15, wherein to determine the capacity of the internal memory, the circuitry is further configured to scale each MAC instance size based on a ratio of a maximum internal memory capacity to a total size of all MAC instances.
18. A method for memory processing, comprising:
processing, by circuitry, a header of a packet and moving the header from an external memory configured to store layer three (L3) data to an internal memory configured to store layer two (L2) data;
processing, by the circuitry, a remaining portion of the packet upon determining that space is available in the internal memory and that a Media Access Control (MAC) layer is ready to prepare data for a next transmission window; and
transferring, by the circuitry, the remaining portion of the packet from the external memory to the internal memory.
19. A method for memory processing, comprising:
maintaining, by circuitry, layer three (L3) data according to at least one first window, wherein the L3 data is stored in an external memory; and
maintaining, by the circuitry, layer two (L2) data according to at least one second window shorter than the first window, wherein the L2 data is stored in an internal memory.
20. A non-transitory computer-readable medium storing instructions that, when executed by a microcontroller of a node, cause the microcontroller to perform a process for memory processing, the process comprising the method of claim 18 or claim 19.
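The ratio-based scaling recited in claim 17 can be sketched briefly. This is a minimal illustration only: the function name, the clamping of the ratio to 1.0, and the integer truncation are assumptions of this sketch, not details of the patented implementation.

```python
def scale_mac_instance_sizes(instance_sizes, max_internal_capacity):
    """Scale each MAC instance size by the ratio of the maximum internal
    memory capacity to the total size of all MAC instances (per claim 17).

    Hypothetical helper for illustration; names and rounding are assumptions.
    """
    total = sum(instance_sizes)
    if total == 0:
        return []
    # Only shrink instances when they collectively exceed capacity;
    # never grow an instance beyond its requested size.
    ratio = min(1.0, max_internal_capacity / total)
    return [int(size * ratio) for size in instance_sizes]

# Three MAC instances requesting 600 units must fit into 300 units:
print(scale_mac_instance_sizes([100, 200, 300], 300))  # → [50, 100, 150]
```

With this scheme each instance keeps its relative share of the internal memory, so higher-demand instances still receive proportionally larger buffers after scaling.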
CN202080094295.7A 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transmission scheme with optimized memory path Pending CN115066844A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202062966686P 2020-01-28 2020-01-28
US62/966,686 2020-01-28
PCT/IB2020/059912 WO2021152369A1 (en) 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transfer scheme with optimized memory path

Publications (1)

Publication Number Publication Date
CN115066844A true CN115066844A (en) 2022-09-16

Family

ID=77078077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080094295.7A Pending CN115066844A (en) 2020-01-28 2020-10-22 Dynamic uplink end-to-end data transmission scheme with optimized memory path

Country Status (2)

Country Link
CN (1) CN115066844A (en)
WO (1) WO2021152369A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024063785A1 (en) * 2022-09-23 2024-03-28 Zeku, Inc. Apparatus and method for logical channel prioritization (lcp) processing of high-density, high-priority small packets
WO2024092697A1 (en) * 2022-11-04 2024-05-10 华为技术有限公司 Communication method, apparatus and system
WO2024123357A1 (en) * 2022-12-09 2024-06-13 Zeku Technology (Shanghai) Corp., Ltd. Apparatus and method for robust header compression processing using a local customized shared memory
WO2024155269A1 (en) * 2023-01-16 2024-07-25 Zeku Technology (Shanghai) Corp., Ltd. Apparatus and method for using a physical layer subsystem to directly wakeup a downlink dataplane subsystem


Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6792500B1 (en) * 1998-07-08 2004-09-14 Broadcom Corporation Apparatus and method for managing memory defects
US6707818B1 (en) * 1999-03-17 2004-03-16 Broadcom Corporation Network switch memory interface configuration
US7376755B2 (en) * 2002-06-11 2008-05-20 Pandya Ashish A TCP/IP processor and engine using RDMA

Patent Citations (13)

Publication number Priority date Publication date Assignee Title
US5450563A (en) * 1992-10-30 1995-09-12 International Business Machines Corporation Storage protection keys in two level cache system
US20060146831A1 (en) * 2005-01-04 2006-07-06 Motorola, Inc. Method and apparatus for modulating radio link control (RLC) ACK/NAK persistence to improve performance of data traffic
EP2187400A1 (en) * 2008-11-14 2010-05-19 Telefonaktiebolaget L M Ericsson (publ) Network access device with shared memory
US20100180006A1 (en) * 2008-11-14 2010-07-15 Seyed-Hami Nourbakhsh Network Access Device with Shared Memory
US20110280204A1 (en) * 2008-11-14 2011-11-17 Seyed-Hami Nourbakhsh Modular Radio Network Access Device
WO2010064783A1 (en) * 2008-12-02 2010-06-10 Mth Inc Communication method and device in communication system and recording medium having recorded program for carrying out communication method
US20100325393A1 (en) * 2009-04-27 2010-12-23 Lerzer Juergen Technique for performing layer 2 processing using a distributed memory architecture
US20100274921A1 (en) * 2009-04-27 2010-10-28 Lerzer Juergen Technique for coordinated RLC and PDCP processing
US20110235635A1 (en) * 2010-03-26 2011-09-29 Verizon Patent And Licensing, Inc. Internet protocol multicast on passive optical networks
US20150245349A1 (en) * 2014-02-24 2015-08-27 Intel Corporation Enhancement to the buffer status report for coordinated uplink grant allocation in dual connectivity in an lte network
CN106471754A (en) * 2014-06-11 2017-03-01 康普技术有限责任公司 By the bit rate efficient transmission of distributing antenna system
US20180285254A1 (en) * 2017-04-04 2018-10-04 Hailo Technologies Ltd. System And Method Of Memory Access Of Multi-Dimensional Data
US20190342225A1 (en) * 2018-05-07 2019-11-07 Apple Inc. Methods and apparatus for early delivery of data link layer packets

Non-Patent Citations (1)

Title
Xu Guanghua; Wang Liangmin; Zhan Yongzhao: "EDA-MAC: An Event-Driven Application-Aware MAC Protocol for Wireless Sensor Networks", Journal of Chinese Computer Systems *

Also Published As

Publication number Publication date
WO2021152369A1 (en) 2021-08-05

Similar Documents

Publication Publication Date Title
US8988994B2 (en) System and method for creating logical radio link control (RLC) and medium access control (MAC) protocol data units (PDUs) in mobile communication system
CN115066844A (en) Dynamic uplink end-to-end data transmission scheme with optimized memory path
EP2575321B1 (en) A radio receiving apparatus
CN102710389B (en) Method and user equipment for transmitting uplink data
JP2006311543A (en) Method and apparatus for polling transmission status in a wireless communication system
CN115066975B (en) Layer 2 downstream data on-line processing using integrated circuits
CN113472683A (en) Data discarding method and device, terminal and storage medium
US8589586B2 (en) Method and apparatus for managing transmission of TCP data segments
WO2017049647A1 (en) Data sending method, data receiving method and relevant device
US20220368494A1 (en) Uplink re-transmission with compact memory usage
WO2021212438A1 (en) Data transmission method, apparatus and system, terminal device, and storage medium
CN104168214A (en) Grouped data discarding method and device
US20230101531A1 (en) Uplink medium access control token scheduling for multiple-carrier packet data transmission
CN106209325A (en) A kind of TCP ACK message processing method and device
CN115176428A (en) Command and response descriptor handling in a software and hardware interworking system
JPWO2018189882A1 (en) Wireless communication device, wireless communication method, and wireless communication system
CN119946695A (en) A communication method and device
US20090257377A1 (en) Reducing buffer size for repeat transmission protocols
CN110856218B (en) Data encapsulation method and communication device
WO2021152363A2 (en) Layer 2 uplink data inline processing using integrated circuits
CN115066867A (en) Uplink data transmission scheduling
CN113507726B (en) Data transmission method and device in separated bearing mode and terminal equipment
CN111726865B (en) Medium access control service data unit processing and receiving method, equipment and device
CN113507725B (en) Data transmission method, device and terminal equipment in separate bearing mode
CN110611558B (en) Method and device for collecting mobile terminal information, collecting equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230721

Address after: Room 01, 8th floor, No.1 Lane 61, shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant after: Zheku Technology (Shanghai) Co.,Ltd.

Address before: Room 260, 2479E Bay Shore Road, Palo Alto, California, USA

Applicant before: Zheku Technology Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220916
