
WO2009069072A1 - Multiple-input queuing system - Google Patents

Multiple-input queuing system

Info

Publication number
WO2009069072A1
WO2009069072A1 PCT/IB2008/054936 IB2008054936W WO2009069072A1 WO 2009069072 A1 WO2009069072 A1 WO 2009069072A1 IB 2008054936 W IB2008054936 W IB 2008054936W WO 2009069072 A1 WO2009069072 A1 WO 2009069072A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
memory
data
stream
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2008/054936
Other languages
English (en)
Inventor
Huzaifa Najmi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NXP BV
Original Assignee
NXP BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NXP BV filed Critical NXP BV
Publication of WO2009069072A1
Anticipated expiration
Current legal status: Ceased


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/901: Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • H04L 49/3036: Shared queuing
    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 49/253: Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L 49/254: Centralised controller, i.e. arbitration or scheduling
    • H04L 49/3045: Virtual queuing

Definitions

  • The invention relates to the field of computer and communication systems, and in particular to a system that receives multiple input-streams that are routed to a common output port.
  • Multiple-input, common-output systems are common in the art.
  • Multiple hosts may communicate data to a common server; multiple processors may access a common memory device; multiple data streams may be routed to a common transmission media; and so on.
  • The input to the multiple-input system is characterized by bursts of activity from one or more input-streams. During these bursts of activity, the arrival rate of input data generally exceeds the allowable departure rate of the data to a subsequent receiving system, and buffering must be provided to prevent a loss of data.
  • One of two types of systems is employed to manage the routing of multiple input-streams to a common output, depending upon whether the design priority is maximum memory-utilization efficiency or maximum performance.
  • A common buffer is provided for queuing the data from the input streams, and each process that is providing an input-stream controls access to this common buffer, in accordance with a given control protocol. Data is unloaded from this common buffer to provide the common output. Because a common buffer is used to receive the flow from the various input-streams, the size of the buffer can be optimized for a given aggregate arrival rate. That is, because it is extremely unlikely that all input-streams will be active contemporaneously, the common buffer is sized substantially smaller than the size required to accommodate maximum flow from all streams simultaneously. The performance of such an embodiment, however, is dependent upon the poorest-performing process that is providing an input-stream, because a poor process can tie up the common buffer while all of the other processes await access to the common buffer.
  • A multiple-input queuing system of this type is disclosed in US 5,233,603.
  • The system contains a single buffer memory connected to multiple input and output lines.
  • A multiplexer and a de-multiplexer connect the input and output lines to the memory.
  • Different memory areas are provided for different inputs and outputs.
  • This patent discloses the use of different memory elements, each connected to a different, predetermined output line. The input lines access these memory elements via a shared bus.
  • A separate buffer memory is used for each combination of an input and an output line.
  • GB-A-2349296 discloses a network switch with a multiple-input, single-output buffering system. For each input, a predetermined buffer memory is provided.
  • Each buffer 110' provides a queue for receiving data from its corresponding input-stream 101'.
  • A receiving system (not shown in Fig. 1) asserts an "Unload(n)" command to select the next-available data-item from the n-th queue, and this selected data-item Qn is subsequently communicated to the receiving system.
  • The selection of the particular input data stream n is typically effected based on a prioritization scheme.
  • The system 100' typically includes a means for notifying the receiving system that data from an input-stream is available, and the receiving system selects from among the available streams based on a priority that is associated with the stream.
  • Alternative protocols for controlling the flow of data from a plurality of input-streams are commonly employed, including, for example, transmission control in the system 100' and a combination of transmission and reception control by the system 100' and the receiving system, respectively.
  • The selection of the particular input-stream may include any of a variety of schemes, including a first-in-first-out (FIFO) selection, a round-robin selection, and so on, in addition to, or in lieu of, the aforementioned priority scheme.
  • The design choices for a multiple-input system include a choice of the size D of the input queues. Based on the estimated input and output flow rates, a queue size D can be determined to minimize the likelihood of an overflow of the queue.
  • The queues associated with each input-stream 101' of system 100' are illustrated as being similarly sized. If it is known that a particular input-stream has a flow rate that substantially differs from the other input-streams, it may be allocated a smaller or larger queue size.
  • The system 100' is configured to allow a maximum burst of D data-items from any of the input-streams, based on the expected processing speed of the subsequent receiving system. Queuing-theory techniques are common in the art for determining an optimal value of D, given an expected distribution of arrivals of data-items at any input-stream and an expected distribution of removals of the data-items by the subsequent receiving system.
  • Each queue is sized to accommodate a worst-case estimate of arrivals.
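  • As a purely illustrative sketch (an addition of this edit, not part of the application), the sizing of D can be made concrete with the classical M/M/1 model, assuming Poisson arrivals of rate λ and exponential removals of rate μ:

      % Illustrative M/M/1 sizing rule; the application does not prescribe any
      % particular arrival or removal distribution.
      \rho = \frac{\lambda}{\mu} < 1, \qquad
      P(\text{queue occupancy} > D) = \rho^{\,D+1}, \qquad
      D \ge \frac{\ln \varepsilon}{\ln \rho} - 1
      \quad \text{for a target overflow probability } \varepsilon .

  For example, ρ = 0.5 and ε = 10^-6 give D ≥ 19, i.e. a 19-element queue keeps the overflow probability below one in a million under these assumptions.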
  • In EP 1 481 317 B1, a state-of-the-art multiple-input queuing system is disclosed which helps to reduce the area consumed by memory devices.
  • The system maintains a mapping of the memory locations of the buffer that is allocated to each data-item in each input-stream.
  • Memory locations that are allocated to each input-stream are maintained in a sequential, first-in, first-out queue.
  • The multiple-input queuing system according to EP 1481317 B1 will be described in more detail below.
  • The multiple-input queuing system of EP 1481317 B1 is disadvantageous in that the usage of memory is not optimal. It is thus an object of the present invention to further improve the efficiency of the memory usage.
  • The system according to the invention provides an efficient and high-performance multiple-input, single- or multiple-output system which minimizes the memory requirement.
  • The main advantage of the system according to the invention is thus a significant gain in efficiency of memory usage, due to the application of circular buffers to implement the necessary queues required by the mapper.
  • Circular buffers as such are known from the prior art. Circular buffers have a single read pointer and a single write pointer, each pointing to the next memory location in the buffer for read and write procedures, respectively. The pointers are incremented by one, modulo D, each time a read or write is performed in the buffer of size D.
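  • The following minimal Python sketch (names and interface are assumptions of this edit, not code from the application) illustrates such a conventional circular buffer with a single read pointer and a single write pointer, each incremented modulo D:

      from typing import Any, List, Optional

      class CircularBuffer:
          """Conventional circular buffer of fixed size D with one read and one write pointer."""

          def __init__(self, depth: int) -> None:
              self.depth = depth                                   # D, the number of buffer elements
              self.slots: List[Optional[Any]] = [None] * depth
              self.read_ptr = 0                                    # next location to read
              self.write_ptr = 0                                   # next location to write
              self.count = 0                                       # number of stored items

          def write(self, item: Any) -> None:
              if self.count == self.depth:
                  raise OverflowError("circular buffer is full")
              self.slots[self.write_ptr] = item
              self.write_ptr = (self.write_ptr + 1) % self.depth   # modulo-D increment
              self.count += 1

          def read(self) -> Any:
              if self.count == 0:
                  raise IndexError("circular buffer is empty")
              item = self.slots[self.read_ptr]
              self.slots[self.read_ptr] = None
              self.read_ptr = (self.read_ptr + 1) % self.depth     # modulo-D increment
              self.count -= 1
              return item

  The sketch simply rejects writes to a full buffer; how a full queue is handled in practice is a design choice not addressed here.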
  • A quantitative estimation of the memory saved by the system according to the invention, when compared to the multiple-input queuing system known from EP 1 481 317 B1, will be made below in connection with the drawings.
  • Another aspect of the present application is a method of buffering data-items from a plurality of input-streams, including: receiving, in an allocator, an allocation request from one or more input-streams of the plurality of input-streams; allocating a selected memory-element of a plurality of memory-elements of a plurality of buffers to a selected input-stream of said input-streams; storing a received data-item from the selected input-stream to the selected memory-element in a buffer; storing an address of the selected memory-element corresponding to the selected input-stream in a queue of a mapper, wherein the queue is designed as a circular buffer; receiving an unload request that identifies the selected input-stream; and providing the received data-item from the selected memory-element, based on an identification of the selected memory-element corresponding to the selected input-stream.
  • A further aspect of the present application is a computer-readable medium having a computer program stored thereon. The computer program comprises instructions operable to cause
  • The present application can be useful in the field of computer and communication systems and can in particular be applied to a system that receives multiple input-streams that are routed to a common output port. It may also be useful for implementing multimedia or other algorithms that use circular queues with multiple entities storing finite sets of generally similar data.
  • Multiple hosts may communicate data to a common server; multiple processors may access a common memory device; multiple data streams may be routed to a common transmission media and so on.
  • Another example of application may be lookup tables (LUT) storing address pointers to different but similar sized buffers.
  • Fig. 1 shows an example block diagram of a prior-art multiple-input queuing system;
  • Fig. 2 shows an example block diagram of an advanced prior-art multiple-input queuing system;
  • Fig. 3 shows an example block diagram of a multiple-input queuing system in accordance with this invention;
  • Fig. 4 shows a detailed block diagram of the mapper in the system according to Fig. 3;
  • Fig. 5 shows an example allocation scheme for the allocator in the system according to Fig. 3;
  • Figs. 6 to 9 show tables with the calculated memory saving by the system according to the invention for different parameters D, P and I; and
  • Figs. 10 to 29 show diagrams of the calculated memory saving by the system according to the invention for different parameters D, P and I.
  • The main advantage of the invention is a significant gain in efficiency of memory usage due to the application of circular buffers to implement the necessary queues required by the mapper.
  • Fig. 2 illustrates an example block diagram of a multiple-input queuing system 300' as described in EP 1 481 317 B1.
  • The system 300' includes a dual-port memory 220', wherein writes to the memory 220' are controlled by an allocator 240' and reads from the memory 220' are controlled by a mapper 250'.
  • The write and read processes to and from the memory 220' are symbolically represented by switch 210' and switch 260', respectively.
  • The memory 220' includes P addressable memory-elements, and each memory-element is of sufficient width W to contain a data-item from any of the input-streams 101'.
  • The parameter P in system 300' is at least as large as the parameter D in system 100' of Fig. 1.
  • The system 100' includes a total of N·D memory-elements of width W.
  • The memory 220' includes a total of P memory-elements of width W.
  • The allocator 240' is configured to provide the location of a currently unused memory-element within the memory 220', to which the next data-item from the input-streams 101' is directed. As indicated by the dashed lines between the input-streams 101' and the allocator 240', the allocator 240' is configured to receive a notification whenever an input-stream 101' has a new data-item to be transmitted. The allocator 240' is further configured to note the removal of data-items from the individual memory-elements. As each data-item is removed, the memory-element that had contained this data-item is now available for receiving new data-items, as a currently-unused memory-element.
  • An overflow of the memory 220' only occurs if all P memory-elements are filled with data-items that have not yet been removed.
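  • A hedged Python sketch of such an allocator (class and method names are assumptions of this edit): it hands out indices of currently-unused memory-elements and returns an index to the free pool when the corresponding data-item is removed, so an overflow can only arise when all P elements are occupied:

      from collections import deque
      from typing import Deque

      class Allocator:
          """Tracks which of the P memory-elements are currently unused."""

          def __init__(self, num_elements: int) -> None:
              self.free: Deque[int] = deque(range(num_elements))   # all P elements start unused

          def allocate(self) -> int:
              if not self.free:
                  raise OverflowError("all P memory-elements are occupied")  # overflow condition
              return self.free.popleft()                           # index p of a currently-unused element

          def release(self, p: int) -> None:
              self.free.append(p)                                  # element p becomes available again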
  • The mapper 250' is configured to ensure that data-items are unloaded/removed from the memory 220' in an appropriate order.
  • A receiving system (not shown) calls for data-items in a sequence that may differ from the sequence in which the data-items are received at the multiple-input queuing system 300'.
  • The system 300' may be configured to allow the receiving system to specify the input-stream M(n) from which the next data-item is to be sent.
  • A process at an input-stream M(n) may initiate a request to send m data-items to the receiving system, and the receiving system subsequently sends m "Unload(n)" commands to the queuing system 300' to receive these m data-items, independent of the arrival of other data-items at system 300' from the other input-streams 101'. That is, relative to each input-stream, the data-items are provided to the receiving system in sequence, but the receiving system may call for the data-items from selected input-streams independent of the order of arrival of data-items from other input-streams.
  • The allocator 240' communicates the allocation of each memory-element location p to each input-stream n as a stream-element pair (n, p) to the mapper 250'.
  • The mapper 250' thereby maintains a list of each memory-element location indicator pn that is sequentially assigned to each arriving data-item from each input-stream n.
  • When the receiving system requests the "next" data-item from a particular input-stream n, the mapper 250' extracts the next location indicator pn from the list associated with the input-stream n and uses that location indicator pn to provide the contents of the memory-element p as the output Qn, via the switch 260'. This location indicator pn is removed from the list associated with the input-stream n, and the allocator 240' thereafter includes the memory-element p as a currently-unused memory location.
  • The mapper 250' includes multiple first-in-first-out (FIFO) queues 355', each queue 355' being associated with one corresponding input-stream 101' to the multiple-input queuing system 300'.
  • When the allocator 240' allocates a memory-element p to an input-stream M(n), the address of this memory-element p is stored in the queue corresponding to input-stream M(n), the index n being used to select the queue 355' corresponding to input-stream M(n).
  • The address p at which the data-item is stored is thus recorded in the queue corresponding to the input-stream, in sequential order.
  • Each queue 355' in the example mapper 250' of Fig. 2 is illustrated as having a queue-length of D, consistent with the prior-art queue lengths illustrated in Fig. 1. Note, however, that the width of the queues 110' of Fig. 1 is W, so that the total size of each queue 110' is D·W. Because each queue 355' of Fig. 2 is configured to store an address to the P memory-elements, the total size of each queue 355' is D·log2(P). The width of the address, log2(P), is generally substantially less than the width of a data-item.
  • The queues 355' of Fig. 2 will therefore be less than a third (10/32) of the size of the buffers 110' of Fig. 1.
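  • A short worked example of this ratio (the concrete values P = 1024 and W = 32 bits are assumptions chosen to be consistent with the quoted 10/32 figure; they are not stated explicitly in this excerpt):

      % Per-stream mapper queue of Fig. 2 versus per-stream data queue of Fig. 1
      \frac{D \cdot \log_2 P}{D \cdot W} = \frac{\log_2 1024}{32} = \frac{10}{32} \approx 0.31 < \tfrac{1}{3}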
  • A multiplexer 350' selects the queue corresponding to the selected input-stream M(n), and the next available index pn is removed from the selected queue 355'.
  • The index pn is used to select the corresponding memory-element p, via a multiplexer 260', to provide the output Qn corresponding to the "Unload(n)" request from the receiving system.
  • The allocator 240' marks the memory-element p as a currently unused memory-element, thereby allowing it to be allocated to newly arriving data-items, as required.
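  • A minimal Python sketch of this prior-art (Fig. 2) behaviour, with all names and the interface assumed for illustration: a shared memory of P elements, a free pool maintained by the allocator, and one FIFO of addresses per input-stream maintained by the mapper; Unload(n) pops the oldest address for stream n and returns the element to the free pool:

      from collections import deque
      from typing import Any, Deque, Dict, List, Optional

      class PriorArtQueuingSystem:
          """Fig. 2 style system: shared memory of P elements, one address FIFO per input-stream."""

          def __init__(self, num_streams: int, num_elements: int) -> None:
              self.memory: List[Optional[Any]] = [None] * num_elements            # memory 220'
              self.free: Deque[int] = deque(range(num_elements))                  # allocator 240' free pool
              self.queues: Dict[int, Deque[int]] = {n: deque() for n in range(num_streams)}  # mapper 250'

          def receive(self, n: int, data_item: Any) -> None:
              if not self.free:
                  raise OverflowError("all P memory-elements are occupied")
              p = self.free.popleft()            # allocator picks a currently-unused element
              self.memory[p] = data_item         # switch 210' routes the data-item to element p
              self.queues[n].append(p)           # mapper records (n, p) in stream n's FIFO

          def unload(self, n: int) -> Any:
              p = self.queues[n].popleft()       # oldest address recorded for stream n
              data_item = self.memory[p]
              self.memory[p] = None
              self.free.append(p)                # element p becomes currently-unused again
              return data_item

      # Usage: per-stream FIFO order is preserved, independent of other streams.
      system = PriorArtQueuingSystem(num_streams=4, num_elements=8)
      system.receive(2, "pkt-A")
      system.receive(2, "pkt-B")
      assert system.unload(2) == "pkt-A"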
  • Also illustrated in Fig. 2 is an example embodiment of a multiple-input, multiple-output switch 210' that is configured to route a data-item from an input-stream 101' to a selected memory-element p in a memory 220'.
  • The example switch 210' includes a multiplexer 310' corresponding to each memory-element of the memory 220', which is enabled via a select(np) command from the allocator 240'.
  • Each multiplexer 310' associated with a memory-element is configured to receive a select(np) command, wherein np identifies the input-stream that has been allocated to that memory-element.
  • The data-item from the n-th input-stream is routed to the p-th memory-element. This allows for the storage of data-items from multiple contemporaneous input-streams 101'.
  • The total memory required by the system illustrated in Fig. 1 is calculated as follows:
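  • The formula itself is not reproduced in this excerpt; from the element counts stated above (and taking the data-item width W and the address widths in bits), a plausible reconstruction, given here as an assumption of this edit, is:

      % M1: Fig. 1 system with N dedicated data queues of depth D
      % M2: Fig. 2 system with a shared memory of P elements plus N address queues
      M_1 = N \cdot D \cdot W, \qquad
      M_2 = P \cdot W + N \cdot D \cdot \log_2 P .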
  • The amount of memory saved is also proportional to the system design parameters D and N.
  • The memory saved by this system is in the range of 45% up to 85%, compared to the system illustrated in Fig. 2.
  • Figs. 3 and 4 show an example block diagram of a multiple-input queuing system in accordance with the invention.
  • The system according to the invention specifically modifies the mapper, the memory and the allocator of the prior system according to Fig. 2. Comparable components in both systems, like the mapper, the allocator, the buffer and so on, bear the same reference signs.
  • The system comprises a memory unit 220 that includes at least one buffer B(b).
  • I is a design parameter of the system and can be determined based on the expected input-output flow of the system. Determining an adequate value for I is common in the art. This means that the system according to the invention requires a smaller number of queues (N/I) for N input-streams M(n), compared to the prior-art system of Fig. 2 (N queues for N input-streams). Further, the system according to the invention uses I separate buffers B(b), each of size P/I, instead of a single buffer of size P as in the case of the prior-art system of Fig. 2.
  • Each memory element is of sufficient width of W bytes to store any data-item of one of the input-streams M(n).
  • The design according to the present invention has the disadvantage that any given buffer B(b) of size P/I in the memory unit 220 may be utilized by N/I input-streams M(n) only.
  • The design mentioned in patent EP 1 481 317 B1 allows all N streams to utilize the entire buffer memory P.
  • The practical choice of I will be influenced by all the parameters, i.e. N, D, P and, most importantly, the input-output data flow rate.
  • The system according to the invention further comprises an allocator 240, which controls writes to the memory unit 220.
  • The allocator 240 is configured to allocate a memory element of one of the buffers B(0) to B(I-1), having an address A, for storing a data-item from a selected input-stream M(n).
  • The allocator 240 is further configured to receive notifications whenever an input-stream M(n) has a data-item to be transmitted, as indicated by the dashed arrows in Fig. 3.
  • The arbitration logic of the allocator 240 may be designed such that the reception of further data-items of the selected input-stream M(n) is prevented until the data-item of the selected input-stream M(n) stored in one of the plurality of buffers B(0) to B(I-1) is output by the system.
  • Various priority schemes may be implemented, including dynamic prioritization based on the content of each data-item, or based on a prior history of transmissions from one or more of the input-streams M(n), and others.
  • A simple round-robin input selection scheme may be used, wherein the allocator 240 sequentially samples each input-stream M(n) for new data.
  • The allocator 240 is further configured to provide the address A of a currently unused memory element in a selected buffer B(b) in the memory unit 220, to which the next data-item from the input-stream M(n) is directed.
  • In the present embodiment, the system comprises a multiple-input switch 210 configured to route the data-item from the selected input-stream M(n) to the selected memory-element in the selected buffer B(b).
  • The multiple-input switch 210 comprises a plurality of multiplexers 310 coupled to the allocator 240, each multiplexer 310 being coupled to one specific memory element of the P memory elements in the buffers B(0) to B(I-1).
  • Every buffer B(b) is designed to hold data corresponding to N/I input-streams.
  • The allocator 240 may use different schemes to determine the buffer B(b) to which the data-item from a particular input-stream M(n) will be written. A few of these schemes are illustrated below; they may or may not be changed dynamically.
  • For example, the allocator may store the data elements of input-streams M(0) to M((N/I)-1) in buffer B(0).
  • In general, B(b) contains the streams from M(b*(N/I)) to M([(b+1)*(N/I)]-1).
  • Alternatively, the allocator ensures that the sum of the probabilities of occurrence of data elements of all input-streams assigned to any buffer B(b) is approximately the same. This scheme ensures an optimized usage of all buffers B(b), by allocating streams based on the probability of occurrence of data-items.
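  • One way to realise such a probability-balanced assignment is a simple greedy heuristic; the following Python sketch is an assumption of this edit (the application does not prescribe a specific algorithm): streams are visited in order of decreasing probability of occurrence and each is placed in the buffer with the smallest accumulated probability that still has room for one of its N/I streams:

      from typing import Dict, List

      def assign_streams_to_buffers(stream_prob: List[float], num_buffers: int) -> Dict[int, List[int]]:
          """Greedy assignment of N input-streams to I buffers so that the summed
          probability of occurrence per buffer is approximately equal.
          Assumes N is divisible by I, as in the description above."""
          num_streams = len(stream_prob)
          per_buffer_limit = num_streams // num_buffers            # N/I streams per buffer B(b)
          load = [0.0] * num_buffers                               # accumulated probability per buffer
          assignment: Dict[int, List[int]] = {b: [] for b in range(num_buffers)}

          # Place the most probable streams first; always pick the least-loaded buffer with room left.
          for n in sorted(range(num_streams), key=lambda s: stream_prob[s], reverse=True):
              candidates = [b for b in range(num_buffers) if len(assignment[b]) < per_buffer_limit]
              best = min(candidates, key=lambda c: load[c])
              assignment[best].append(n)
              load[best] += stream_prob[n]
          return assignment

      # Example: 8 streams and I = 2 buffers; each buffer receives 4 streams of similar total probability.
      probs = [0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05]
      print(assign_streams_to_buffers(probs, num_buffers=2))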
  • The queues Q(q) of the mapper 250 are designed as circular buffers.
  • For each buffer B(b), the circular buffer Q(q) comprises a read pointer R(q, b) and a write pointer W(q, b), with b and q as defined above.
  • The circular buffers are thus designed as I-dimensional circular buffers, each having I read and I write pointers.
  • The dimension "I" indicates that up to I entities may simultaneously store data in a single location of the circular buffer.
  • The circular buffers of the system of the present invention each comprise D buffer elements, i.e. "D" is the size of each buffer.
  • The read and write pointers are incremented by one, modulo D, each time a read or write is performed.
  • Each buffer element f comprises two parts: Data(f) is the actual data stored in the buffer, and Owner(f) stores a kind of meta-data indicating to which entities the data in the corresponding Data(f) belongs.
  • Owner(f) is I bits wide, with one bit for every entity that can contribute a value to Data(f).
  • A "1" in the b-th bit of Owner(f) indicates that the corresponding Data(f) is owned by the b-th entity. Anywhere from zero up to all I entities may own the data in the corresponding Data(f), as indicated by the number of "1" bits in Owner(f).
  • The size of each Data(f) is in general unrestricted and is based on the specific requirement. The greater the size of Data(f), the more memory will be saved using I-dimensional circular buffers. It should also be noted that this scheme is particularly useful in case the Data(f) values are repeated by different entities. This is typically the case when Data(f) stores addresses of similarly sized buffers. Presently, Data(f) holds the address A of a memory location of one or more buffers B(b) in the memory unit 220. Data(f) is thus log2(P/I) bits wide. Each bit of Owner(f) indicates which of the buffers B(b) have contributed to the value in the corresponding Data(f).
  • The corresponding width of the Owner(f) part is I bits.
  • The overall width W of a buffer element is thus log2(P/I) + I bits, i.e. (log2(P/I) + I)/8 bytes.
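  • A worked example of these widths and of the resulting mapper storage (the values N = 32, D = 16, P = 1024 and I = 4 are assumptions chosen for illustration, not figures quoted in this excerpt):

      % Queue-element width of the invention, and mapper storage relative to the Fig. 2 mapper
      \log_2\!\frac{P}{I} + I = \log_2 256 + 4 = 12 \text{ bits}, \qquad
      \frac{(N/I) \cdot D \cdot (\log_2(P/I) + I)}{N \cdot D \cdot \log_2 P}
      = \frac{8 \cdot 16 \cdot 12}{32 \cdot 16 \cdot 10} = \frac{1536}{5120} = 0.30 .

  Under these assumed parameters the mapper of the invention would need roughly 30% of the storage of the Fig. 2 mapper; the overall saving additionally depends on the shared memory unit, as calculated further below.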
  • Multiple buffers may write the same address A in Data(f) of Q(q), each simultaneously updating its respective bit in Owner(f) of Q(q). Since the size of Data(f) is generally much greater than the size of Owner(f), a considerable amount of memory is saved.
  • The mapper 250 selects the queue Q(q) in accordance with the above-mentioned scheme used by the allocator 240.
  • Each queue element indicates all the buffers B(b) which have contributed the value A to Data(f), using one unique bit per buffer B(b) in Owner(f). This can be ensured by writing a '1' in the b-th LSB ("least significant bit", as commonly referred to in the art) of Owner(f) for every buffer B(b) that has contributed to Data(f).
  • Each buffer B(b) holds data-items received from N/I input-streams.
  • Receiving a data-item from input-stream M(n) in the system according to the invention includes the following steps (a combined sketch of the write and read procedures is given after the output steps below):
  • First, the next valid write location Data[W(b)] in Q(q) is determined with the help of the write pointer W(b).
  • W(b) always points to the next location in Q(q) whose b-th bit in Owner[W(b)] is not set.
  • (If the address A equals Data[W(b)]): the b-th bit of Owner[W(b)] is set and W(b) is modulo-D incremented by 1.
  • In the remaining cases, W(b) is modulo-D incremented by 1 and step 1 is carried out again.
  • Outputting a data-item from input-stream M(n) in the system according to the invention includes the following steps:
  • The receiver asserts an Unload(n) command to the mapper to indicate that it would like to read the next data-item corresponding to input-stream M(n).
  • Step 1 is carried out again.
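  • The following Python sketch combines the write procedure (receiving) and the read procedure (outputting) described above for a single queue Q(q). Because the individual cases are only partially reproduced in this excerpt, the exact case handling, as well as all names, are assumptions of this edit rather than the literal procedure of the application:

      from dataclasses import dataclass
      from typing import List, Optional

      @dataclass
      class Element:
          data: Optional[int] = None     # Data(f): an address A into a buffer B(b), possibly shared
          owner: int = 0                 # Owner(f): I-bit mask; bit b set means entity b owns Data(f)

      class IDimensionalCircularQueue:
          """Sketch of a queue Q(q) of depth D whose elements can be shared by up to I owners."""

          def __init__(self, depth: int, num_owners: int) -> None:
              self.depth = depth                                    # D buffer elements
              self.num_owners = num_owners                          # I owners (one per buffer B(b))
              self.elements: List[Element] = [Element() for _ in range(depth)]
              self.write_ptr = [0] * num_owners                     # W(b), one write pointer per owner
              self.read_ptr = [0] * num_owners                      # R(b), one read pointer per owner
              self.pending = [0] * num_owners                       # entries currently owned by b

          def write(self, b: int, address: int) -> None:
              if self.pending[b] == self.depth:
                  raise OverflowError(f"queue is full for owner {b}")
              for _ in range(self.depth):                           # examine at most D candidate locations
                  f = self.write_ptr[b]
                  elem = self.elements[f]
                  self.write_ptr[b] = (f + 1) % self.depth          # modulo-D increment of W(b)
                  if elem.owner == 0:                               # free location: store the address
                      elem.data = address
                      elem.owner |= 1 << b
                      self.pending[b] += 1
                      return
                  if elem.data == address and not (elem.owner >> b) & 1:
                      elem.owner |= 1 << b                          # share an identical address, Data(f) unchanged
                      self.pending[b] += 1
                      return
                  # otherwise the location holds a different address: try the next location
              raise OverflowError(f"no usable location found for owner {b}")

          def read(self, b: int) -> int:
              if self.pending[b] == 0:
                  raise IndexError(f"queue is empty for owner {b}")
              for _ in range(self.depth):
                  f = self.read_ptr[b]
                  elem = self.elements[f]
                  self.read_ptr[b] = (f + 1) % self.depth           # modulo-D increment of R(b)
                  if (elem.owner >> b) & 1:                         # oldest entry still owned by b
                      elem.owner &= ~(1 << b)                       # release b's ownership bit
                      address = elem.data
                      if elem.owner == 0:
                          elem.data = None                          # last owner gone: the location is free again
                      self.pending[b] -= 1
                      return address
              raise RuntimeError("owner bookkeeping is inconsistent")

      # Usage: two owners sharing one Data(f) entry that holds the same address.
      q = IDimensionalCircularQueue(depth=4, num_owners=2)
      q.write(0, address=7)
      q.write(1, address=7)     # only Owner(f) changes; the address 7 is stored once
      assert q.read(0) == 7 and q.read(1) == 7

  The memory saving of the scheme comes from the second write above, which sets a single owner bit instead of storing the log2(P/I)-bit address a second time.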
  • The main advantage of the system according to the invention is that it saves a significant amount of memory compared to the prior-art system of Fig. 2. This will be elucidated by the following memory calculations:
  • M3 = (memory required by the memory unit 220) + (memory required by the mapper 250)
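  • Written out from the element counts and widths given above (a reconstruction offered as an assumption of this edit, since the formula is only partially reproduced here, and taking all widths in bits):

      % Memory unit 220: P elements of width W; mapper 250: N/I circular queues of D elements each
      M_3 = P \cdot W + \frac{N}{I} \cdot D \cdot \left( \log_2\frac{P}{I} + I \right),
      \qquad \text{compared with} \qquad
      M_2 = P \cdot W + N \cdot D \cdot \log_2 P
      \quad \text{for the system of Fig. 2.}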
  • Figs. 6 to 9 and 10 to 29 illustrate, by way of tables and charts, how much memory is saved for different values of the design parameters N, D, P and I.
  • The memory saved ranges from a minimum of 45% to as high as 85% for the parameters used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a system that provides an efficient and high-performance multiple-input, single- or multiple-output system which minimizes memory requirements. The main advantage of the system according to the invention is thus a significant gain in memory-usage efficiency, due to the application of circular buffers to implement the necessary queues required by the mapper.
PCT/IB2008/054936 2007-11-29 2008-11-25 Multiple-input queuing system Ceased WO2009069072A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP07121871.3 2007-11-29
EP07121871 2007-11-29

Publications (1)

Publication Number Publication Date
WO2009069072A1 (fr) 2009-06-04

Family

ID=40445489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/054936 Ceased WO2009069072A1 (fr) 2007-11-29 2008-11-25 Multiple-input queuing system

Country Status (1)

Country Link
WO (1) WO2009069072A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213595A1 (en) * 2004-03-23 2005-09-29 Takeshi Shimizu Limited cyclical redundancy checksum (CRC) modification to support cut-through routing
US6975637B1 (en) * 1999-01-27 2005-12-13 Broadcom Corporation Apparatus for ethernet PHY/MAC communication
US7215637B1 (en) * 2000-04-17 2007-05-08 Juniper Networks, Inc. Systems and methods for processing packets


Similar Documents

Publication Publication Date Title
US7295565B2 (en) System and method for sharing a resource among multiple queues
CN104821887B (zh) Apparatus and method for packet processing by means of memories with different latencies
US6922408B2 (en) Packet communication buffering with dynamic flow control
EP1421739B1 (fr) Transmission of multicast data packets
US6442162B1 (en) Credit-based scheme for high performance communication between devices in a packet-based communication system
EP1616415B1 (fr) Method and device for shared multi-block memory
US8880808B1 (en) Centralized memory allocation with write pointer drift correction
US9459829B2 (en) Low latency first-in-first-out (FIFO) buffer
US7327674B2 (en) Prefetching techniques for network interfaces
US6892285B1 (en) System and method for operating a packet buffer
US20030112818A1 (en) Deferred queuing in a buffered switch
US9769092B2 (en) Packet buffer comprising a data section and a data description section
US8223788B1 (en) Method and system for queuing descriptors
EP2526478A1 (fr) Packet buffer comprising a data section and a data description section
US6389493B1 (en) System and method for dynamically allocating bandwidth to a plurality of slave cards coupled to a bus
EP1481317A2 (fr) Shared queue for multiple input streams
US8156265B2 (en) Data processor coupled to a sequencer circuit that provides efficient scalable queuing and method
CA2110134A1 (fr) Interface utilisant un processeur pour memoire a paquets intelligente
US7822051B1 (en) Method and system for transmitting packets
WO2007147441A1 (fr) Method and system for grouping interrupts from a time-dependent data storage means
WO2004107685A1 (fr) Method and system for maintaining partial order of packets
WO2009069072A1 (fr) Multiple-input queuing system
US20060039284A1 (en) Method and apparatus for processing a complete burst of data
US7984210B2 (en) Method for transmitting a datum from a time-dependent data storage means
CN100396044C (zh) ATM switching device with dynamic buffer management and switching method thereof

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08853632

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08853632

Country of ref document: EP

Kind code of ref document: A1