HK1064532A - System and method for real time adaptive capacity scheduling - Google Patents
- Publication number
- HK1064532A (application No. HK04107120.4A)
- Authority
- HK
- Hong Kong
Description
Cross reference to related applications
This application is related to U.S. patent application Nos. 09/434832, 09/434815, 09/434816 and 09/434707, each commonly assigned, not yet issued, and entitled "SYSTEM AND METHOD FOR BROADBAND MILLIMETER WAVE DATA COMMUNICATION", the disclosures of which are incorporated herein by reference. Each of the above applications is a divisional of commonly assigned U.S. Patent No. 6016313, entitled "SYSTEM AND METHOD FOR BROADBAND MILLIMETER WAVE DATA COMMUNICATION", issued on January 18, 2000, which is currently under reexamination as Reexamination Nos. 90/005726 and 90/005974.
This application is related to and filed concurrently with the following commonly assigned U.S. patent applications: application No. ______, entitled "SYSTEM AND METHOD FOR DYNAMIC BANDWIDTH ALLOCATION"; application No. ______, entitled "SYSTEM AND METHOD FOR DYNAMIC BANDWIDTH ALLOCATION IN A POINT-TO-MULTIPOINT COMMUNICATION SYSTEM"; application No. ______, entitled "FRAME TIMING SYNCHRONIZATION SYSTEM AND METHOD"; application No. ______, entitled "SYSTEM AND METHOD FOR MANAGING BURST CHARACTERISTICS IN A TDM SYSTEM"; and application No. ______, entitled "SYSTEM AND METHOD FOR CONTROLLING THE STATE OF A SYSTEM FOR DEMODULATING BURSTS IN A TDM SYSTEM", the disclosures of which are hereby incorporated by reference.
This application claims priority to U.S. Provisional Application No. 60/266475, not yet granted, the disclosure of which is incorporated herein by reference.
Background
In a point-to-multipoint (PTMP) system comprising at least one central station and at least one remote station, the allocation of available bandwidth among the stations is determined according to the aggregate demand for available bandwidth by all remote stations in the system, irrespective of the individual needs of the respective users of each remote station.
Point-to-multipoint (PTMP) systems, such as wireless communication systems, are often required to accommodate more channels. This requires more efficient allocation of the limited available bandwidth and more efficient scheduling of system resources and capacity. A wireless communication system facilitates two-way communication between a central station and a number of subscriber radio stations or subscriber units, referred to here as remote stations. Examples of such systems include mobile cellular telephone systems and Personal Communication Systems (PCS). The goal of many wireless communication systems is to provide the required communication channels between a central station and remote stations. In a wireless system using a multiple access scheme, a time frame is the basic unit of transmission. Each frame is divided into a plurality of time slots, some used for control purposes and some for information transmission. Information is typically transmitted in specifically allocated time slots of the frames. Transmissions from a central station to a remote station are commonly referred to as forward link transmissions; transmissions from a remote station to the central station are generally referred to as reverse link transmissions.
Wireless communication systems typically use either Time Division Duplex (TDD) or Frequency Division Duplex (FDD) methods to facilitate the exchange of information between a central station and remote stations. Both TDD and FDD duplexing schemes are well known in the art. In an FDD system, duplex transmission between a central station and its remote stations is effected in the frequency domain. Different frequency groups are allocated for forward and reverse transmissions. In a TDD system, duplex transmissions between a central station and its remote stations are effected in the time domain. The channel is time divided into repeating time periods or time "slots" that are used for both forward and reverse transmissions.
The bandwidth requirements of a PTMP system vary as a function of time. For example, in a PTMP system that provides broadband services, the forward link and reverse link transmissions may have unequal or asymmetric bandwidth requirements. Likewise, the required ratio of forward link to reverse link bandwidth may vary from station to station and/or from channel to channel. In addition, the need for symmetric or asymmetric communication in a channel may vary depending on the type of user/remote station. Therefore, there is a need for a system and method that can dynamically and adaptively allocate available bandwidth.
Furthermore, system resources/capacity should be efficiently scheduled to accommodate the increasing demand for channels. For example, many systems transmit information according to priorities assigned to the information/data; that is, the highest priority data is scheduled for transmission before lower priority data. One disadvantage of this scheduling scheme is that if a large amount of highest priority data must be scheduled during a period of severe traffic congestion, lower priority data cannot be scheduled for transmission within the allowed time. Another approach is to schedule on a first-in-first-out (FIFO) basis; however, this may result in the highest priority data not being scheduled/transmitted within the allowed time. A further disadvantage of existing systems is their inherent tendency to favor forward link transmissions over reverse link transmissions. There is therefore a need for a system and method for dynamically and adaptively scheduling resources/capacity and allocating bandwidth in systems such as PTMP systems.
Drawings
The foregoing and other advantages and features of the invention will be better understood from the following detailed description of preferred embodiments of the invention, which is provided in connection with the accompanying drawings. The various parts of the drawings may not be to scale. Included in the drawing are the following figures:
FIG. 1 shows a stylized illustration of a PTMP system;
FIG. 2 shows a graphical representation of time division formatting consistent with one embodiment of the present invention;
FIG. 3 shows a diagram illustrating the relationship between a central station, remote stations, and communication channels in accordance with one embodiment of the present invention;
FIG. 4 illustrates a functional block diagram of a real-time adaptive scheduling and bandwidth allocation system consistent with one embodiment of the present invention;
FIG. 5 shows a diagram illustrating centralized scheduling priority inversion;
FIG. 6 illustrates a functional block diagram of an exemplary inter-channel scheduler, consistent with one embodiment of the invention;
FIG. 7 is a flowchart illustrating an exemplary process for scheduling capacity and allocating bandwidth, consistent with an embodiment of the present invention;
FIG. 8 illustrates a schematic diagram of an exemplary communication stream formation architecture;
FIG. 9 shows a schematic diagram illustrating a limited communication flow implementation of the communication flow forming architecture;
FIG. 10 shows a schematic diagram illustrating a bursty communication flow implementation of the communication flow forming architecture; and
FIG. 11 shows a flowchart illustrating the steps required to fetch an address from the fixed and dynamic bank FIFOs shown in FIG. 10.
Detailed Description
The capacity scheduling and bandwidth allocation system and method described herein utilize centralized and distributed capacity scheduling to accomplish real-time adaptive capacity scheduling and bandwidth allocation in a wireless point-to-multipoint (PTMP) environment. FIG. 1 shows a stylized illustration of a PTMP system 100. The system 100 comprises at least one central station 24 and at least one remote station 22. A remote station 22 may include a Local Area Network (LAN), represented by local area networks 26 and 28, individual processors, represented by processor 30, a wireless communication device, represented by and assisted by antenna 32, an electromagnetic and/or optical communication device, represented by and assisted by medium 34, or any combination thereof. A central station 24 may include a processor, represented by processor 36, a wireless transmission device, represented by and assisted by antenna 38, an electromagnetic and/or optical communication device, represented by and assisted by medium 34, or any combination thereof. In an exemplary embodiment, antenna 38 is an omnidirectional antenna that facilitates concurrent communication with all other stations. It will be appreciated that antenna 38 may instead comprise several directional antennas. As shown in system 100, communication may occur between a plurality of central stations 24, between a central station 24 and a remote station 22, between a plurality of remote stations 22, or any combination thereof.
Of particular note are Time Division Multiplexed (TDM) and Time Division Multiple Access (TDMA) systems, in which communication between stations is accomplished by formatting information into unique time periods. Descriptions of TDM and TDMA systems may be found in textbooks on wireless communications, such as "Wireless Communications: TDMA versus CDMA" by Savo G. Glisic and Pentti A. Leppänen, June 1997, and "Wireless Communications & Networks" by William Stallings, August 23, 2001. FIG. 2 shows a graphical illustration of time division formatting consistent with the present invention. As shown in FIG. 2, a communication channel is made up of a plurality of frames 42. Each frame 42 is divided into a forward link portion 44 and a reverse link portion 46. In the forward link 44, information is transmitted from a central station (e.g., central station 24) to at least one remote station (e.g., remote station 22). In the reverse link 46, information is transmitted from a remote station (e.g., remote station 22) to a corresponding central station (e.g., central station 24). Each of the forward link portion 44 and the reverse link portion 46 is divided into a plurality of time slots 48. According to the scheduling and bandwidth allocation system and method described herein, the length of each time slot 48 may be independently and dynamically reconfigured according to the resource requirements, including bandwidth requirements, of a PTMP system (e.g., system 100).
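The frame structure just described lends itself to a brief sketch. The following Python fragment is illustrative only: the `Slot` and `Frame` names, the symbol-based length unit, and the 4/2 slot split are assumptions used to show how a slot length could be reconfigured independently per frame, as the paragraph above describes.

```python
from dataclasses import dataclass, field

@dataclass
class Slot:
    length_symbols: int  # slot length; independently reconfigurable

@dataclass
class Frame:
    # A frame is split into a forward link portion and a reverse link portion,
    # each holding a number of time slots (cf. portions 44/46 and slots 48).
    forward: list = field(default_factory=list)
    reverse: list = field(default_factory=list)

    def resize_slot(self, direction: str, index: int, new_length: int) -> None:
        """Dynamically reconfigure the length of one slot in one direction."""
        portion = self.forward if direction == "forward" else self.reverse
        portion[index].length_symbols = new_length

# An asymmetric split: 4 forward slots, 2 reverse slots, each 100 symbols.
frame = Frame(forward=[Slot(100) for _ in range(4)],
              reverse=[Slot(100) for _ in range(2)])
frame.resize_slot("reverse", 0, 250)  # grow one reverse slot on demand
```

The point of the sketch is simply that slot lengths are per-slot state, not a fixed property of the frame, so the scheduler can reshape the forward/reverse split as bandwidth requirements change.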
FIG. 3 is a diagram illustrating the relationship between the central station 24, the remote stations 22, and the communication Channels (CPs). Communication between each remote station 22 and a central station 24 is accomplished through a communication Channel (CP). As shown in FIG. 3 and described with reference to FIG. 1, communication between the various stations may be accomplished via several different types of media, such as air links, vacuum, electrical conductors, optical conductors, or any combination of the above. According to the systems and methods described herein, multiple communication Channels (CPs) share and utilize the limited number of physical time slots typical of wireless TDM or TDMA architectures. These structures may employ various duplexing schemes, such as Frequency Division Duplex (FDD), Time Division Duplex (TDD), or any combination thereof. A single CP is associated with communication between one central station 24 and one remote station 22 in a wireless PTMP environment (e.g., system 100). A single CP can simultaneously support multiple application sessions, such as multiple Internet Protocol (IP) packet flows and Asynchronous Transfer Mode (ATM) virtual connections. The exemplary configuration shown in FIG. 3 consists of an ATM application using 128 CPs. Each CP includes a plurality of forward and reverse time slots 48. Each ATM access point includes N Virtual Connection (VC) terminals. It will be appreciated that the configuration shown in FIG. 3 is exemplary only. For example, the number of CPs may be greater or less than 128, the number of VCs associated with each Multiplexer (MUX) may differ (i.e., not equal N), and the illustrated application need not be Asynchronous Transfer Mode (ATM) but may instead be, for example, Internet Protocol (IP) with priority services and recent IP quality of service (QoS) enhancements, Open Systems Interconnection (OSI), or Multiprotocol Label Switching (MPLS).
The information to be transferred is divided into fixed-size data blocks called Protocol Data Units (PDUs). Each PDU includes a payload portion and a prepended portion. The data contained in the payload portion represents the information to be transmitted and/or received. The data contained in the prepended portion indicates the priority of the information and the time the PDU has been waiting for scheduled transmission. As explained in more detail herein, the data in the prepended portion is dynamically and adaptively updated to accomplish scheduling and bandwidth allocation. To accommodate all the information to be communicated in a PTMP system, such as system 100, PDUs are scheduled based on the priorities assigned to the information, the time a PDU has been waiting for transmission, and parameters associated with each CP. Information is assigned a priority, or priorities, according to a particular class of service. A class of service (COS) is a token that describes the service treatment and rights granted to a particular entity, such as a terminal or a CP. For example, the highest priority may be assigned to information with a "time sensitive" COS, while lower priorities may be assigned to information with a "non-time sensitive" or "best effort" service level.
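As a rough illustration of the PDU layout described above, the following Python sketch models a payload plus a prepended portion carrying priority and waiting time. The field names and the 48-byte payload size are assumptions for illustration, not values taken from the specification.

```python
from dataclasses import dataclass

PAYLOAD_SIZE = 48  # assumed fixed PDU payload size, for illustration only

@dataclass
class PDUHeader:
    priority: int     # class-of-service priority (1 = highest)
    wait_time: float  # time spent waiting for scheduled transmission, in frames

@dataclass
class PDU:
    header: PDUHeader
    payload: bytes

    def age(self, frames_elapsed: float) -> None:
        """Adaptively update the prepended data as the PDU keeps waiting."""
        self.header.wait_time += frames_elapsed

pdu = PDU(PDUHeader(priority=1, wait_time=0.0), payload=bytes(PAYLOAD_SIZE))
pdu.age(1.5)  # the PDU has now waited 1.5 frame times
```

The scheduler never inspects the payload; only the prepended priority and waiting time feed the scheduling decisions described later.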
Each CP supports a limited number of priority-based classes of service (COS). Each CP is associated with parameters contained in a corresponding Traffic Management Contract (TMC). According to the real-time dynamic scheduling apparatus described herein, each CP may have its own unique TMC, all CPs may use the same TMC, or any combination thereof. Each TMC consists of parameters used by the real-time dynamic scheduling system to allocate capacity resources and bandwidth and to select the PDUs to be transmitted. A TMC can be updated in real time. Various parameters are included in each TMC, such as the desired minimum and maximum transmission rates in each direction, the maximum allowable delay increment for each class of service (COS), and the drop delay threshold for each COS. In one embodiment of the invention, each TMC consists of the attributes listed in Table 1.
Table 1. Communication Channel Traffic Management Contract (CP TMC)
| Parameter | Description | Use |
|---|---|---|
| MFSR | Minimum frame service rate | Indicates how often a communication channel must be polled and/or allocated its minimum capacity. The parameter is counted in airlink frames: if 1, the channel is served every airlink frame; if 2, the channel is served once every 2 frames, and so on. The service interval is a minimum; the maximum is every frame. |
| FPPR | Forward peak PDU rate | The agreed peak PDU rate for the forward link, expressed as # PDUs per minimum frame service rate (MFSR) interval. Depending on the traffic load, more or less capacity may be allocated to the communication channel in this direction, but this parameter defines the agreed (i.e., desired) peak rate. |
| RPPR | Reverse peak PDU rate | The agreed peak PDU rate for the reverse link, expressed as # PDUs per MFSR interval. Depending on the traffic load, more or less capacity may be allocated to the communication channel in this direction, but this parameter defines the agreed (i.e., desired) peak rate. |
| FMPR | Forward minimum PDU rate | The agreed minimum PDU rate for the forward link, expressed as # PDUs per MFSR interval. Depending on the traffic load, more capacity may be allocated to the communication channel in this direction, but this parameter defines the agreed (i.e., desired) minimum rate. If the parameter is zero (0), no minimum capacity is allocated to the forward link, and PDU scheduling occurs only when asynchronous traffic arrives for the forward link. |
| RMPR | Reverse minimum PDU rate | The agreed minimum PDU rate for the reverse link, expressed as # PDUs per MFSR interval. Depending on the traffic load, more capacity may be allocated to the communication channel in this direction, but this parameter defines the agreed (i.e., desired) minimum rate. If the parameter is zero (0), the remote station is polled for its queue capacity, but no PDU capacity is allocated until a certain amount has accumulated in its transmit queue. |
| P1-MDI | Priority 1 maximum delay increment | Specifies the maximum delay increment that can be tolerated in the PDU flow of the channel's priority 1 class of service. The increment is a real number expressed in tenths of the frame time (e.g., 0.1 or 1.5). This parameter is used in the prioritization of scheduling, as explained later herein. |
| P2-MDI | Priority 2 maximum delay increment | Specifies the maximum delay increment that can be tolerated in the PDU flow of the channel's priority 2 class of service. The increment is a real number expressed in tenths of the frame time (e.g., 0.1 or 1.5). This parameter is used in the prioritization of scheduling, as explained later herein. |
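One way to represent the per-CP contract of Table 1 in software is sketched below; the parameter names follow the table, while the class name, types, sample values, and the `serves_frame` helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TMC:
    """Per-CP Traffic Management Contract, after Table 1 (updatable in real time)."""
    mfsr: int      # Minimum frame service rate: serve this CP every N airlink frames
    fppr: int      # Forward peak PDU rate, in # PDUs per MFSR interval
    rppr: int      # Reverse peak PDU rate, in # PDUs per MFSR interval
    fmpr: int      # Forward minimum PDU rate (0 = schedule only on arrivals)
    rmpr: int      # Reverse minimum PDU rate (0 = poll only, no standing grant)
    p1_mdi: float  # Priority 1 maximum delay increment, in tenths of frame time
    p2_mdi: float  # Priority 2 maximum delay increment, in tenths of frame time

    def serves_frame(self, frame_number: int) -> bool:
        """True if the CP must be serviced in the given airlink frame (MFSR rule)."""
        return frame_number % self.mfsr == 0

# Illustrative contract: served every 2nd frame, asymmetric peak rates,
# no reverse minimum (poll-only), tight priority 1 delay bound.
tmc = TMC(mfsr=2, fppr=8, rppr=4, fmpr=1, rmpr=0, p1_mdi=0.1, p2_mdi=1.5)
```

Because the fields are plain mutable attributes, a real-time update of the contract (as the text allows) is just an assignment to the relevant field.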
The CP COS and TMC system and method described above can readily be applied to various wireless communication systems. For example, various ATM-based quality of service (QoS) offerings may be grouped and/or mapped onto the priority-based CP COS, and the TMC of each CP may be updated in real time to reflect the sum of the minimum and maximum rates of the ATM Virtual Connections (VCs) it supports. Furthermore, since the TMC can be updated in real time, transport of both ATM Permanent Virtual Connections (PVCs) and Switched Virtual Connections (SVCs) can be supported.
Referring to FIG. 4, a functional block diagram of a real-time adaptive scheduling and bandwidth allocation system is shown. Capacity scheduling and bandwidth allocation utilize inter-channel scheduling and intra-channel scheduling. Inter-channel scheduling determines the allocation of available capacity among multiple competing communication Channels (CPs). Intra-channel scheduling determines the transmission of PDUs within a single direction of a single CP.
The example block diagram shown in FIG. 4 illustrates the case of scheduling for two CPs. The inter-channel scheduler 52 distributes bulk capacity grants to the forward intra-channel scheduler 54(a) and the reverse intra-channel scheduler 54(b). A bulk grant includes information indicating the exact number of PDUs that can be transmitted in each direction within a given transmission frame (also referred to herein as an airlink frame). This information is distributed to the remote stations 22 via airlink control channel signaling. In one exemplary embodiment, the information is distributed to the remote station 22 one airlink frame time prior to its actual use.
During the forward portion of the transmission frame, each active instance of forward intra-channel scheduling uses the bulk grant provided by the inter-channel scheduler 52 to make explicit PDU selections among the PDUs in its COS queuing mechanism. The intra-channel scheduling uses an instantaneous priority calculation for explicit PDU selection, followed by generation of a capacity report detailing the number of PDUs remaining queued in the given direction within the CP. The capacity report is provided as input to the channel's bid processor 58.
During the reverse portion of the transmission frame, each active instance of reverse intra-channel scheduling uses the bulk grant provided by the inter-channel scheduler 52 to make explicit PDU selections among the PDUs in its COS queuing mechanism. The intra-channel scheduling uses an instantaneous priority calculation for explicit PDU selection, followed by generation of a capacity report detailing the number of PDUs remaining queued in the given direction within the CP. The capacity report is provided as input to the channel's bid processor 58.
The channel's bid processor 58 utilizes the capacity report provided by the intra-channel scheduler 54 to determine the PDUs that have arrived most recently and that need to be considered by the inter-channel scheduler 52. These newly arrived PDUs are submitted to the inter-channel scheduler 52. The inter-channel scheduler 52 then re-distributes the next bulk grant with data associated with the newly arrived PDU, thereby ensuring a system-wide closed-loop scheduling mechanism.
For each wireless CP, two instances of intra-channel scheduling are implemented. The first instance corresponds to the forward direction (central station 24 to remote station 22) and the other corresponds to the reverse direction (remote station 22 to central station 24). The distributed intra-channel scheduler 54 can prevent premature priority inversion and the incremental Protocol Data Unit (PDU) delays caused by the execution time of centralized scheduling. For example, consider the typical case shown in FIG. 5, in which a priority 1 PDU arrives at the remote station 22 after a priority 2 PDU has already been selected and granted capacity. When the remote station 22 receives the grant for one (1) PDU, it faces a scheduling challenge: it now has two PDUs to transmit, and the PDU that has been granted capacity has a lower priority, and is less delay sensitive, than the later-arriving priority 1 PDU.
The adaptive real-time scheduler described herein helps prevent premature priority inversion and mitigates the negative effects of the incremental delays in PDU transmission caused by scheduling latency. The remote station 22 has the ability to make the final decision as to which PDU to send against the bulk grant. Thus, the priority 1 PDU in FIG. 5 may be transmitted before the priority 2 PDU. Accordingly, the inter-channel scheduler 52, based at the central station 24, adjusts its real-time scheduling to accommodate the distributed decisions made by the intra-channel scheduler 54.
The intra-channel scheduler 54 implements a distributed, priority-based dynamic algorithm. It schedules PDUs by priority while allowing controlled priority inversion when indicated by the parameters of the channel's TMC. This property of the intra-channel scheduler is referred to as Dynamic Delay Weighted Prioritization (DDWP). The intra-channel scheduler 54 performs priority-based scheduling according to the delay sensitivity of the associated PDUs, and the DDWP (intra-channel) scheduler 54 has the ability to decide which PDUs are transmitted against the bulk grants from the associated inter-channel scheduler 52.
According to DDWP, the intra-channel scheduler 54 determines and assigns an instantaneous priority to the PDU at the front of the transmission queue for each class of service. The mechanism uses a time-dependent instantaneous priority index, which is then used in explicit PDU selection. The instantaneous priority of each PDU is determined according to the following equation:

P_j(t) = MDI_j - ω_j(t)    (1)

where t represents time; P_j(t) represents the instantaneous priority corresponding to the jth class of service as a function of time; MDI_j represents the maximum allowable delay increment of the communication channel for the jth class of service; and ω_j(t) represents the maximum time that a jth-class-of-service PDU has been waiting for allocation. The instantaneous priority index P_j(t) is used to decide explicit inter-class PDU selection. Thus, if the delay in a lower priority PDU stream of class "j" has increased beyond the maximum allowable delay increment MDI_j of the channel for that class of service, and there is slack in the higher priority PDU stream "j-1" (i.e., even after the priority inversion, class "j-1" does not exceed its MDI_(j-1)), DDWP directs that the lower priority PDU be chosen before the higher priority PDU. It will be appreciated that the real-time adaptive scheduler and bandwidth allocation system and method described herein may be applied to systems including any number of classes of service.
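A minimal sketch of the DDWP selection rule of equation (1): the class whose head-of-queue PDU has the smallest index P_j(t) = MDI_j - ω_j(t) is served first, so a lower priority class that has waited past its MDI undergoes a controlled priority inversion. The queue representation and function names are illustrative assumptions.

```python
def instantaneous_priority(mdi_j: float, wait_time_j: float) -> float:
    """Equation (1): P_j(t) = MDI_j - w_j(t); a smaller index wins."""
    return mdi_j - wait_time_j

def select_next_class(queues: dict) -> int:
    """Pick the service class whose head-of-queue PDU transmits next.

    `queues` maps class j -> (MDI_j, head-of-queue wait time w_j(t));
    classes with empty queues are simply omitted from the dict.
    """
    return min(queues, key=lambda j: instantaneous_priority(*queues[j]))

# The priority 2 head PDU has waited past its MDI (2.0 > 1.5) while the
# priority 1 head still has slack (0.1 < 0.5), so a controlled priority
# inversion occurs and class 2 is served first.
queues = {1: (0.5, 0.1), 2: (1.5, 2.0)}
chosen = select_next_class(queues)  # class 2 wins: 1.5 - 2.0 < 0.5 - 0.1
```

Note that once the class 1 head PDU's own waiting time approaches its (tighter) MDI, its index shrinks and it regains precedence, which is exactly the bounded inversion the text describes.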
Once the intra-channel scheduler 54 decides which explicit PDUs it will use to satisfy its bulk grants, the CP is responsible for updating the inter-channel scheduling function with any newly arrived PDU payload that requires scheduling capacity. To accomplish this, the CP capacity bid processor 58, residing at the central site, maintains a history of channel PDU capacities for each class of service. This history includes the count of PDUs that remained queued in the CP the last time the CP reported its queue capacity, together with the count of PDUs granted by the inter-channel scheduler 52 since the last capacity update. Using the actual capacity reported by the intra-channel scheduling instance, the CP's capacity bid function then calculates any newly arrived load and reports it to the inter-channel scheduler 52. The newly arrived PDUs for each class of service are calculated as follows: P1_NEW = P1_ACTUAL + P1_GRANT - P1_PREV, P2_NEW = P2_ACTUAL + P2_GRANT - P2_PREV, and so on (P1 for priority 1, P2 for priority 2, etc.). Because the hub-based capacity bid function assumes that the PDUs granted to the intra-channel scheduling instance have been transmitted, it is continuously adjusted to the actual queue counts reported by the intra-channel scheduling function. By doing so, it synchronizes the centralized inter-channel scheduling function with the distributed intra-channel decision-making process. After the count of newly arrived PDUs has been provided to the inter-channel scheduling function, Pn_ACTUAL becomes Pn_PREV and the grant count Pn_GRANT is reset; the channel's bid function is thus ready for the next capacity report.
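The capacity bid bookkeeping described above can be sketched as follows; the `BidProcessor` class and its method names are illustrative assumptions, but the arithmetic follows Pn_NEW = Pn_ACTUAL + Pn_GRANT - Pn_PREV.

```python
class BidProcessor:
    """Per-CP, per-COS PDU-count history kept at the central site (cf. 58)."""

    def __init__(self, num_classes: int = 2):
        self.prev = [0] * num_classes     # queue counts at the last report
        self.granted = [0] * num_classes  # PDUs granted since the last report

    def record_grant(self, cos: int, count: int) -> None:
        """Note a bulk grant issued by the inter-channel scheduler."""
        self.granted[cos] += count

    def new_arrivals(self, actual: list) -> list:
        """Compute newly arrived PDUs from the reported queue counts, then
        roll the history forward, ready for the next capacity report."""
        new = [a + g - p for a, g, p in zip(actual, self.granted, self.prev)]
        self.prev = list(actual)          # Pn_ACTUAL becomes Pn_PREV
        self.granted = [0] * len(actual)  # grant counters reset
        return new

bid = BidProcessor()
bid.record_grant(0, 3)               # 3 priority-1 PDUs granted this frame
arrivals = bid.new_arrivals([5, 2])  # CP reports 5 and 2 PDUs still queued
```

In this hypothetical exchange, 5 priority-1 PDUs remain queued even though 3 were granted and none were previously known, so 8 priority-1 PDUs must have newly arrived; this is the closed-loop correction the text describes.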
The real-time adaptive scheduling and bandwidth allocation system and method described herein includes one or more independent inter-channel scheduling instances. In one embodiment, these instances are located at the central station 24. There may be one inter-channel scheduling instance per carrier, one per carrier group, one per hub, or one per hub group. The inter-channel scheduler 52 decides the asymmetric bulk grants for each CP under its supervision and is responsible for maintaining fairness among the competing CPs. In any of the configurations described above, there is a one-to-many relationship (e.g., a 1:N relationship) between inter-channel scheduling and intra-channel scheduling, as shown in FIG. 6.
FIG. 6 shows a functional block diagram of an example inter-channel scheduler. Channel capacity requests are submitted to the inter-channel scheduler 52 in the form of PDU payloads. Each PDU submitted for scheduling is assigned a priority index and queued by priority in the appropriate class of service (COS) scheduling table. The priority index ordering key is determined by the inter-channel Windowed Fair Priority Queue (WFPQ) processor 62, which is used to determine CP and asymmetric fairness within each COS. WFPQ is described in detail below. The inter-channel scheduler 52 uses a simple fixed-priority processing scheme and converts the queued PDU tables into CP bulk grants at each scheduling interval. As shown in FIG. 6, in each scheduling interval the priority 1 table is processed until it is empty, followed by the priority 2 table, then the priority 3 table, and so on, until either the scheduling queues are empty or the symbol and/or time budget for the scheduling interval expires. The process repeats at each scheduling interval.
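The fixed-priority processing scheme just described can be sketched as a simple loop. The data layout (deques of `(cp_id, pdu_count)` requests) and the PDU-count budget unit are illustrative assumptions; a real implementation would budget symbols and/or time as the text states.

```python
from collections import deque

def process_interval(tables: list, budget: int) -> list:
    """Convert queued capacity requests into CP bulk grants for one interval.

    `tables` is a list of deques; index 0 holds priority 1 requests, index 1
    priority 2, and so on. Each request is a (cp_id, pdu_count) pair.
    Processing drains the priority 1 table first, then priority 2, until
    the tables are empty or the interval's budget expires.
    """
    grants = []
    for table in tables:                 # priority 1 first, then 2, ...
        while table and budget > 0:
            cp_id, pdus = table.popleft()
            granted = min(pdus, budget)  # partial grant if budget runs short
            grants.append((cp_id, granted))
            budget -= granted
    return grants

# Hypothetical interval: one priority 1 request, one priority 2 request,
# and a budget of 5 PDUs. Priority 1 is served fully before priority 2.
tables = [deque([("cp7", 2)]), deque([("cp3", 4)])]
grants = process_interval(tables, budget=5)
```

With a budget of 5, cp7's priority 1 request is granted in full (2 PDUs) and cp3's priority 2 request absorbs the remaining 3, illustrating both the strict priority order and budget expiry.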
The inter-channel scheduler 52 uses the attributes (parameters) of the TMC in combination with other real-time metrics to determine fairness. Inter-channel capacity scheduling is accomplished by considering capacity allocation (i.e., priority) in the CP COS priority domain shown in FIG. 6, in combination with delay sensitivity. In this delay-sensitive prioritization, however, inter-channel scheduling uses a time-of-arrival windowing scheme instead of a pure FIFO (first-in-first-out) time-stamping method to achieve asymmetric fairness. This scheme treats all capacity requests occurring within a specified scheduling interval as arriving at the same point in time, regardless of the direction from which the request came; the scheduling interval is referred to as the arrival time domain window (AT_TDW). In addition to the windowed time stamps, the total amount of queued capacity in each direction is also taken into account and used as part of the queuing algorithm weighting. Thus, when the priority class and arrival time are the same, the capacity grants can account for pending delay and for the asymmetry inherent in the transmit queue capacities, which serve as a forward-looking predictor of capacity usage. This improves scheduling efficiency by reacting early to signs of asymmetry.
Other factors are also considered in inter-channel scheduling to ensure a fair result across mixed delay-sensitive data streams. These factors include the asymmetric minimum and maximum PDU loading rates of the CPs. For example, if one CP's peak PDU rate is 2 times another CP's peak rate, fairness requires that this be considered in the inter-channel scheduling decisions. In addition, each CP COS has a maximum delay increment parameter in its TMC. A CP with a lower maximum delay increment attribute is weighted more favorably when all other conditions are the same (e.g., priority class, arrival time, queue capacity, and peak rate). Because these various weighting parameters relate to priority, delay sensitivity, and fairness, the inter-channel scheduler 52 is said to use Windowed Fairness Priority Queuing (WFPQ).
Windowed Fairness Priority Queuing (WFPQ) is used to schedule transmission of PDUs at the inter-channel level. It decides capacity allocation across multiple communication Channels (CPs) and provides fair scheduling within a priority-based framework. In a purely fair-share scheduling scheme, a share-based capacity allocation to lower priority, high capacity remote stations may violate the delay constraints of higher priority PDU flows. This can occur in a fair-share scheme because the channel with the largest capacity share also experiences the lowest latency. A purely fixed-priority scheduling scheme may be able to honor the latency limits of high priority PDU flows, but congestion in the higher priority PDU flows may result in periods of capacity starvation for the low priority PDU flows. Windowed fairness priority queuing provides a combination of a fair-share scheme and a priority-based scheme.
The Windowed Fairness Priority Queuing (WFPQ) technique is used to allocate scheduling priorities across multiple classes of service in a multiple-CP environment. In this mode of operation, some delay may be introduced into higher priority data flows when congestion occurs, to allow a fair shared allocation to other classes of service. The WFPQ technique is applied independently for each class of service at the inter-channel level; that is, it allocates scheduling priorities independently within the context of each class of service.
The inter-channel fairness in Windowed Fairness Priority Queuing described herein uses a Time Domain Window (TDW) in which capacity requests are all given an equal arrival time classification (AT_TDW). The arrival time is a function of the time domain window. For example, in a TDD or FDD framing structure that uses the framing period as the time domain window, AT_TDW is incremented for each framing period. When all other parameters are the same (e.g., same COS, peak rate, etc.), PDUs arriving earlier in time are prioritized and thus scheduled before PDUs arriving later in time. Asymmetric scheduling fairness can be maintained by assigning the same arrival time stamp to forward and reverse capacity requests in a single air link frame.
The current class-of-service queue capacity of the communication channel is also used in the scheduling prioritization procedure. Because queue size is a finite resource, when all other conditions are the same (e.g., same COS, AT_TDW, etc.), channels with higher queue pressure receive correspondingly higher priority. In addition, an upcoming asymmetric direction indication can be predicted from the class-of-service queue capacity, which can be used to provide more efficient scheduling.
The scheduling priority indication as a function of the time domain window (P_TDW) conforms to the following equation:

P_TDW = μ_j + AT_TDW + 1/PDU_Est_Class (2)

where P_TDW is the scheduling priority indication as a function of the time domain window, TDW is the time domain window, μ_j is the priority of the jth class of service, AT_TDW is the arrival time classification as a function of the time domain window, and 1/PDU_Est_Class is the inverse of the current estimated capacity of the particular class-of-service queue in the available communication channel. The smaller the resulting P_TDW value, the higher the priority.
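Equation (2) can be sketched as a small function. This is a minimal illustration with representative values; the function and variable names are ours, not the patent's:

```python
def p_tdw(class_priority, at_tdw, est_class_queue_pdus):
    """Scheduling priority indication per equation (2); a smaller value wins."""
    return class_priority + at_tdw + 1.0 / est_class_queue_pdus

# Two channels with the same class of service and arrival window: the channel
# with the deeper class-of-service queue (higher queue pressure) gets the
# smaller P_TDW value, i.e., the higher priority.
busy = p_tdw(class_priority=1, at_tdw=3, est_class_queue_pdus=20)
idle = p_tdw(class_priority=1, at_tdw=3, est_class_queue_pdus=2)
```

Consistent with the text above, the deeper queue contributes 1/20 = 0.05 versus 1/2 = 0.5, so `busy` scores lower than `idle` and is scheduled first.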
Forward and reverse capacity requests are processed and scheduled by priority within a given time domain window as a function of arrival time, class of service and queue capacity. As shown in equation (2), the asymmetry will favor the reverse link direction as the load in the reverse link direction increases. This mechanism provides efficient forward-predictive tracking of asymmetric capacity allocation.
During congestion, PDUs may accumulate in the airlink's distributed queuing mechanism, producing capacity requests for large amounts of PDU capacity. If the individual PDUs in such requests are not prioritized and scheduled in a distribution proportional to their estimated arrival rate, a large cluster of PDUs from a lower-rate channel can disrupt the delay and peak-rate requirements of a higher-rate communication channel. For example, the peak PDU rate of a first communication channel may be 5 PDUs per air link frame, the peak PDU rate of a second communication channel may be 10 PDUs per air link frame, and each communication channel may request 10 PDUs in the same Time Domain Window (TDW) and class of service. In addition, some remote stations may be served only at a rate of several airlink frames, which is related to the communication channel's TMC minimum frame service rate parameter (MFSR). Further, each communication channel has a maximum delay delta (MDI) assigned to it for each class of service. These parameters are real numbers that are functions of the TDW. The resulting PDU scheduling priority indication is therefore also a function of the communication channel's peak PDU rate and maximum delay parameter.
A PDU table is queued for each individual PDU in the requested capacity (e.g., if a reverse link capacity request asks for 10 newly arrived asynchronous PDUs, then 10 PDU tables are queued). Each PDU table includes a scheduling priority indication (P_PDU) equal to the real number defined by the following equation:

P_PDU = μ_j + AT_TDW + 1/PDU_Est_Class + (AT_Adj - 1)/R_path-peak + Max-Delay_Class (3)

where μ_j, AT_TDW and 1/PDU_Est_Class are as defined above, AT_Adj is the per-PDU arrival time adjustment, Max-Delay_Class is the maximum allowable delay delta of the communication channel for the particular class of service (e.g., P2-MDI), and R_path-peak is the peak PDU rate per airlink frame in the appropriate direction (e.g., R_path-peak = FPPR/MFSR).
AT_Adj is the per-PDU arrival time adjustment. It assigns scheduling priorities among the PDUs as a function of the channel's peak PDU rate, which is directly proportional to its worst-case arrival rate. PDU payloads are thus scheduled at or better than the implied arrival rate. Their delays between arrival, scheduling and transmission are in part a function of the inherent airlink delay, over-reservation and current congestion in the wireless transmission system. In addition, distributing scheduling between different communication channels at their peak rates introduces a round-robin-like fairness property inherent to the hub's inter-channel PDU scheduling.
For each PDU within a capacity request, AT_Adj increases from 1 to the total number of newly requested PDUs. (AT_Adj - 1) is divided by the channel's peak PDU rate to obtain that PDU's arrival time adjustment. For example, if the channel peak rate is 10 PDUs per air link frame and a capacity request for 10 PDUs is received, the resulting arrival time adjustments are 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, respectively. This distribution falls within the same Time Domain Window (TDW). (Such an adjustment does not produce an effective increase in the AT_TDW value.)
However, if the channel's peak rate is 5 PDUs per airlink frame and a capacity request for 10 PDUs is received, the resulting arrival time adjustments are 0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6 and 1.8, respectively. Such a capacity request (exceeding the peak PDU rate) is another indicator of congestion and, in this particular example, it results in the PDU scheduling being distributed at the channel's peak rate across two (2) airlink frames. (This adjustment produces an effective increase in the AT_TDW value.) The adjustment also accounts for and compensates remote stations whose Minimum Frame Service Rate (MFSR) spans several airlink frames (e.g., half-rate or quarter-rate remote stations), scheduling them accordingly.
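The per-PDU adjustment in the two examples above can be sketched as follows. The names are illustrative; the division by the peak rate follows the (AT_Adj - 1)/R_path-peak term of equation (3):

```python
def arrival_time_adjustments(num_pdus, peak_pdu_rate):
    """(AT_Adj - 1) / R_path-peak for AT_Adj = 1 .. num_pdus."""
    return [(at_adj - 1) / peak_pdu_rate for at_adj in range(1, num_pdus + 1)]

# Peak rate 10 PDUs/frame: all 10 adjustments stay inside one airlink frame.
within_frame = arrival_time_adjustments(10, 10)   # 0.0, 0.1, ..., 0.9
# Peak rate 5 PDUs/frame: the same request spills across two airlink frames.
spills_over = arrival_time_adjustments(10, 5)     # 0.0, 0.2, ..., 1.8
```

Adjustments of 1.0 or more are exactly the "effective increase in the AT_TDW value" described above: those PDUs land in the next framing period.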
The Max-Delay_Class parameter is an inter-channel scheduling adjustment increment resident in the TMC of each communication channel. It is unique to each service class; thus, each class of service in a communication channel may have its own maximum delay delta (e.g., P1-MDI, P2-MDI, etc.). Its use in the Windowed Fairness Priority Queuing (WFPQ) calculation provides scheduling differentiation between competing communication channels. A communication channel with a lower maximum delay value for a particular class of service is therefore scheduled before a communication channel with a higher maximum delay tolerance (for the same class of service), even when all other conditions are the same.
In each scheduling period or time domain window, a set of PDU tables is built and added to the appropriate class-of-service table queue. During each scheduling period, the fair priority queue is processed by the hub's inter-channel scheduler 52 in P_PDU priority order. During the TDW of a TDD air link frame, the forward link transmission schedulers first update their new PDU bid requests, and then the reverse link transmission schedulers update theirs. Because the PDU table queue is processed sequentially in P_PDU order by the inter-channel scheduler, it represents an ordered list that inherently defines the asymmetric symbol budget attribute for the next TDD frame.
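The per-PDU tables and their processing order can be sketched as below. This is a simplified illustration of equation (3) with hypothetical parameter values, using the two-channel example from the text (peak rates of 10 and 5 PDUs per frame, equal in every other respect):

```python
def p_pdu(mu_j, at_tdw, est_class_pdus, at_adj, peak_rate, max_delay):
    """Per-PDU scheduling priority per equation (3); smaller values are served first."""
    return (mu_j + at_tdw + 1.0 / est_class_pdus
            + (at_adj - 1) / peak_rate + max_delay)

def build_pdu_tables(channel, num_pdus, peak_rate, mu_j=1, at_tdw=0,
                     est_class_pdus=10, max_delay=0.0):
    """One table entry per requested PDU; AT_Adj runs from 1 to the request size."""
    return [(p_pdu(mu_j, at_tdw, est_class_pdus, at_adj, peak_rate, max_delay),
             channel) for at_adj in range(1, num_pdus + 1)]

# Both channels request 10 PDUs in the same TDW and class of service.
queue = sorted(build_pdu_tables("fast", 10, peak_rate=10)
               + build_pdu_tables("slow", 10, peak_rate=5))
```

Sorting by P_PDU interleaves the two channels in proportion to their peak rates: the slower channel's last five PDUs acquire adjustments of 1.0 and above, pushing them behind the faster channel's entire request, which matches the two-frame spillover described above.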
Fig. 7 shows a flow diagram of an example process for scheduling and allocating bandwidth. PDUs are formed at step 70: the information to be transmitted is divided into fixed-size PDUs, each including a payload portion and a prepended portion. The data in the payload portion represents the information to be transmitted and/or received; the data in the prepended portion indicates the priority of the information and how long the PDU has been waiting for scheduling. In step 72, the inter-channel scheduler distributes PDU block grants to the various communication channels based on the aggregate demand for resources and bandwidth presented by all remote stations 22, as previously described herein. These PDU block grants are received by the in-channel scheduler in step 74, and in step 76 the PDUs are scheduled for transmission in each communication channel according to the descriptions herein relating to equations (1) and (2) and the associated text. In step 78, the capacity count for each communication channel/class of service is updated and provided to the inter-channel scheduler for subsequent scheduling.
The present invention may be embodied in the form of programs executable by a computer and apparatus for practicing those programs. The present invention may also be embodied in the form of computer program code stored in tangible media, such as floppy diskettes, read-only memories (ROMs), CD-ROMs, hard drives, high-density disks, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. The present invention may also be embodied in the form of computer program code, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the computer program code segments configure the processor to create specific logic circuits.
An architecture and method for airlink traffic shaping that eliminates TDD effects will be described in detail below. Although well suited to the dynamic bandwidth allocation method described above, the traffic shaping process does not depend on any particular scheduling algorithm. This separation between shaping and scheduling provides flexibility in the target application, air link bandwidth, and payload type. One preferred embodiment discussed below describes an implementation specifically for shaping ATM traffic in a TDD system; however, the architecture can be used in any number of data transmission systems and is not limited to ATM communication in TDD systems.
Attention is now directed to FIG. 8, which shows a logical view of the traffic shaping architecture. In particular, an improvement to the "store-and-forward" systems well known in the art is shown. Protocol Data Units (PDUs) 801 arrive from network interfaces and are labeled with channel and priority indications 802 by an upstream router or switch. The ingress demultiplexer 803 records the arrival time 804 of each PDU 801. The ingress demultiplexer then decodes the channel and priority indicator and selects one of k FIFOs 805 in which to temporarily store the PDU 801, where k is the product of n channels and m priorities per channel (k = n × m). As shown in fig. 8, there are n channels (channel 1, channel 2, ... channel n) and m priorities (P1, P2, ... Pm). When a new PDU is added, the ingress demultiplexer 803 updates the current PDU count 806 for that one of the k FIFOs 805. Scheduling processor 807 examines the counts and arrival times in each of the k FIFOs 805, thereby forming local bid requests 808 for each of the k FIFOs. As described above, local bid requests 808 are considered along with remote bid requests 809 to decide how to allocate bandwidth. The scheduling processor expresses the bandwidth allocated to PDUs to be transmitted from the local system in the form of an egress sequence. The egress sequence 810 indicates how many PDUs the egress multiplexer 811 fetches from some or all of the k FIFOs 805 and in what order. The egress multiplexer then sends the retrieved PDUs to the air interface 812 in the correct order.
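The ingress side of this architecture can be sketched as follows. This is a minimal model with illustrative names, assuming a simple row-major mapping from (channel, priority) to one of the k = n × m FIFOs; the patent does not specify the index layout:

```python
from time import monotonic

class IngressDemux:
    """Sketch of ingress demultiplexer 803: route each labeled PDU to one of
    k = n * m logical FIFOs, stamping its arrival time (804); the per-FIFO
    PDU count (806) is kept implicitly as the FIFO length."""
    def __init__(self, n_channels, m_priorities):
        self.m = m_priorities
        self.fifos = [[] for _ in range(n_channels * m_priorities)]

    def ingress(self, pdu, channel, priority):
        k = channel * self.m + priority        # illustrative row-major layout
        self.fifos[k].append((monotonic(), pdu))

    def counts(self):
        """Per-FIFO PDU counts, as examined by the scheduling processor 807."""
        return [len(f) for f in self.fifos]
```

The counts and timestamps recorded here are exactly the inputs the scheduling processor uses to form local bid requests.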
For large systems constrained by cost and practicality, the architecture shown in FIG. 8 requires a new implementation approach. Any commercially practical implementation must use only a limited amount of memory and processing power, and must complete its tasks in ever-shrinking amounts of time as data rates increase. In view of the current capabilities of programmable processors, memories, and programmable logic, the present systems and methods partition the architecture to make the most of each technology. The two embodiments described below, in figs. 9 and 10, are suited to handling mixes of variable and fixed bandwidth requirements. The embodiment of fig. 9 is suited to limited or well-characterized traffic patterns, while the embodiment of fig. 10 is adapted to handle highly variable or "bursty" traffic patterns. The two embodiments share many common structural elements but differ greatly in how they address the traffic flow management problem.
Referring now to fig. 9, a limited-traffic implementation of the traffic shaping architecture is depicted. In a preferred embodiment, an ATM communication data stream, such as PDU 901 with channel and priority indications 902, is presented over a UTOPIA bus; the implementation performs the shaping functions and sends traffic and control information to the airlink physical layer. It should be appreciated that the system and method of the present invention can operate on any packetized communication data stream presented over a known bus structure. The structural elements of this embodiment shown in fig. 9 map to like-numbered elements shown in fig. 8 (e.g., PDU 901 in fig. 9 corresponds to PDU 801 in fig. 8). In some cases, one structural element of fig. 8 is split into a plurality of structural elements in fig. 9.
To implement a commercially viable solution, a combination of programmable logic, synchronous DRAM, and dual-port RAM (DPRAM) is used to implement the k logical FIFOs. One preferred embodiment uses 384 logical FIFOs. The FIFO function corresponding to FIFO 805 in fig. 8 is split into a PDU memory RAM 905A, circular queues, and the queues' head and tail pointer lists 905B and 905C. This split places mass storage in low-cost SDRAM and only the FIFO bookkeeping in the higher-cost DPRAM. With this structure, PDUs from each channel/priority can be safely inserted into the memory RAM 905A, while pointers to their memory locations are kept in order in the k queues. In the limited-traffic context, the size of the queues (and thus the size of the DPRAM) can be bounded so as to allow variability in individual channels/priorities while still guaranteeing storage space for traffic flows that comply with agreed requirements. A PDU can be safely inserted by selecting a memory address from a (logical) FIFO pool of available memory addresses 903C. By using a global pool of storage addresses, no overhead is incurred in partitioning the mass storage space, as prior art approaches require.
The store-and-forward process will now be described. In a preferred embodiment, PDU 901 is an ATM cell that includes a prepended priority indication and a temporary remote channel ID ("TRPI"), as shown in item 902 of fig. 9. The ingress demultiplexer 903A decodes the prepended priority indicator and TRPI to decide to which of the k logical FIFOs the PDU should be sent. Record store 903B performs a sub-function of demultiplexer 803 of FIG. 8: it checks queue control table 905B to determine whether there is any available space in the selected queue (if the head of the queue is not equal to the tail of the queue, space is available). If space is available, a memory address is fetched from the buffer pool FIFO 903C. The memory address is written to the head of the queue along with a time stamp 904 indicating the arrival time of PDU 901. The payload portion of PDU 901 is stored in memory RAM 905A, and PDU count 906A is updated as the modulo difference of the queue head and tail pointers. The scheduling processor 907 checks the queue control table for the valid count and time stamp of each queue and thereby forms a local bid request. The local bid request is considered along with the remote bid requests to decide how to allocate bandwidth 908. The scheduling processor expresses the bandwidth allocated to PDUs transmitted from the local system in the form of an egress sequence table 910. The egress sequencer 911A directs the discard/retrieve multiplexer 911B as to which memory address each PDU is retrieved from. As each PDU is retrieved, its memory address is returned to the buffer pool 903C.
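The limited-traffic store just described can be sketched as below. Payloads live in a flat RAM (modeled as a dict keyed by address), each logical FIFO is a bounded queue of (address, timestamp) records, and free addresses are drawn from one shared pool, mirroring 903C/905A/905B. The class name and the drop-on-full policy are illustrative:

```python
from collections import deque

class LimitedTrafficStore:
    """Sketch of the FIG. 9 scheme: fixed-depth pointer queues over a shared
    address pool, with payloads in a flat RAM."""
    def __init__(self, num_queues, queue_depth, total_addresses):
        self.ram = {}                                  # models memory RAM 905A
        self.free = deque(range(total_addresses))      # buffer pool 903C
        self.queues = [deque() for _ in range(num_queues)]
        self.depth = queue_depth                       # fixed per-queue limit

    def store(self, q, payload, timestamp):
        if len(self.queues[q]) >= self.depth or not self.free:
            return False                               # no space: PDU is dropped
        addr = self.free.popleft()
        self.ram[addr] = payload
        self.queues[q].append((addr, timestamp))
        return True

    def retrieve(self, q):
        addr, _ = self.queues[q].popleft()
        self.free.append(addr)                         # address back to the pool
        return self.ram.pop(addr)
```

The fixed `queue_depth` is what forces worst-case sizing, the limitation the "bursty" implementation of fig. 10 relaxes.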
The limited-traffic implementation shown in fig. 9 and described above allows the size of the specialized and expensive DPRAM to be minimized. But when traffic flow is not well behaved, i.e., highly non-uniform in time, a fixed queue size is not a reasonable solution, because each fixed queue must be sized for the worst case. An alternative solution suited to poorly behaved traffic flows is presented below.
Attention is now directed to fig. 10, wherein like numerals denote like elements and/or functions of figs. 8 and 9. Fig. 10 illustrates a "bursty" traffic implementation of the traffic shaping architecture. In some cases, a structural element of fig. 8 or 9 is split into a plurality of structural elements. As previously described with reference to fig. 9, the embodiment shown in fig. 10 receives an ATM communication data stream provided over the UTOPIA bus, implements the shaping functions, and transmits the communication stream along with control information to the airlink physical layer. As noted above, one skilled in the art will recognize that the present system and method is not limited to this particular embodiment.
The main difference between the "bursty" implementation shown in fig. 10 and the limited implementation shown in fig. 9 is the implementation of the logical FIFOs. More complex control logic is used to permit an over-subscription of the real storage space far beyond what fixed-size queues allow. In the "bursty" traffic solution shown in fig. 10, the k FIFOs are implemented as linked lists of pointers to memory space, in contrast to the fixed-size circular queues of pointers in fig. 9. Although the allowed size of each linked list is also limited in practice, the sum of all the linked list size limits is larger than the available storage space; that is, the storage is over-subscribed. The linked list sizes are managed in a two-tier mode: two counts are maintained for each linked list. The first count represents the guaranteed availability of storage space. The safest implementation would limit the total guaranteed available capacity to the actual available storage space, but careful analysis of actual traffic patterns indicates that a modest over-reservation of guaranteed availability may be allowed. The second count limits the maximum capacity that a particular linked list may reach. As in the limited-traffic solution, PDUs 1001 of individual channels/priorities are inserted into the storage RAM 1005A. However, for the "bursty" traffic solution, two available-memory-address pool FIFOs are maintained. Pool 1003C1 is referred to as the fixed buffer pool: it is the pool from which guaranteed-available memory addresses are fetched, and its size matches the total guaranteed capacity. Pool 1003C2 is referred to as the dynamic buffer pool: it may be of any size, and it serves the over-subscribed capacity. The other elements in fig. 10 are functionally identical to the corresponding elements in fig. 9, as described above.
The store-and-forward operation of fig. 10 will now be described in detail. According to a preferred embodiment, PDU 1001 is an ATM cell that includes a prepended priority indicator and a temporary remote channel ID ("TRPI") 1002. The ingress demultiplexer 1003A decodes the prepended priority indicator and TRPI to decide to which of the k logical FIFOs the PDU should be sent. The record store implements a sub-function of the demultiplexer 803 of fig. 8: it performs the logic of the flow chart of fig. 11, described below, and selects an address from one of the two buffer pools 1003C1 or 1003C2. The record store 1003B then adds to the linked list 1013 a record that includes a time stamp 1004 and a flag indicating from which of the pools 1003C1 or 1003C2 the address was retrieved. The record store 1003B updates the pointer to the head of the linked list 1005B and updates the PDU count 1006. The payload portion of PDU 1001 is stored in memory RAM 1005A. Scheduling processor 1007 examines count 1006 and time stamp 1004 and forms a local bid request. The local bid request is considered along with the remote bid requests to decide how to allocate bandwidth 1008. The scheduling processor expresses the bandwidth allocated to PDUs transmitted from the local system in the form of an egress sequence table 1010. The egress sequencer 1011A directs the discard/retrieve multiplexer 1011B as to which memory address each PDU is retrieved from. As each PDU is retrieved, its memory address is returned to the appropriate buffer pool 1003C1 or 1003C2.
Attention is now directed to FIG. 11, in which a flow chart depicts the logic used by the record store 1003B to select an address from the two buffer pools 1003C1 or 1003C2 shown in FIG. 10. A PDU arrives in step 1101, and the record store machine decodes its channel and priority information to decide which of the k FIFOs should be used for temporary storage of the incoming PDU. If the fixed count is less than its maximum allowed value (step 1103), an address is fetched from the fixed pool (step 1104), the fixed count is incremented (step 1105), the fixed address is stored in the queue (step 1106), and the linked list is updated (step 1112). If the fixed count is not less than its maximum allowed value at step 1103, a dynamic count decision is made at step 1107. If the dynamic count is not less than its maximum value in step 1107, the PDU is discarded in step 1111. If the dynamic count is less than its maximum allowed value at step 1107, a determination is made at step 1108 as to whether the dynamic pool is empty. If the dynamic pool is not empty (step 1108), an address is fetched from the dynamic pool (step 1109), the dynamic count is incremented (step 1110), the dynamic address is stored in the queue (step 1106), and the linked list is updated (step 1112).
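The FIG. 11 selection logic can be sketched as a small function. This is a simplified rendering with illustrative names, in which the caller is assumed to increment the appropriate count after a successful fetch:

```python
def select_address(fixed_count, fixed_max, dyn_count, dyn_max,
                   fixed_pool, dyn_pool):
    """Address selection per FIG. 11. Returns (address, pool_flag) on success
    or (None, None) when the PDU must be discarded (step 1111)."""
    if fixed_count < fixed_max and fixed_pool:     # steps 1103-1104
        return fixed_pool.pop(), "fixed"
    if dyn_count < dyn_max and dyn_pool:           # steps 1107-1109
        return dyn_pool.pop(), "dynamic"
    return None, None                              # step 1111: discard

# A channel whose guaranteed (fixed) allotment is exhausted falls through to
# the over-subscribed dynamic pool, and is dropped only when that too is spent.
addr, flag = select_address(0, 2, 0, 1, [10, 11], [20])
```

The returned flag mirrors the record's pool indicator, so the address can be returned to the correct pool when the PDU is later retrieved.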
While the present invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the appended claims should be construed broadly, to include other forms and embodiments of the invention which may be practiced by those skilled in the art without departing from the scope and range of equivalents of the invention.
Claims (26)
1. A point-to-multipoint (PTMP) system comprising at least one hub station and at least one remote station, wherein allocation of available bandwidth between stations is determined in accordance with an aggregate demand for available bandwidth by all remote stations in the system, irrespective of individual needs of each remote station by respective users of said remote stations.
2. The system of claim 1, wherein:
the communication channel between the stations is designed to be made up of a plurality of frames, each frame including a forward link portion and a reverse link portion; and
the ratio of the forward link portion to the reverse link portion of each frame is assigned independently of the ratio of the forward link portion to the reverse link portion of the other frames.
3. The system of claim 1 wherein a communication channel between said stations is formed by a plurality of Protocol Data Units (PDUs), said system further comprising:
a first scheduler section for allocating a certain number of PDUs to each of the communication channels, respectively; and
a second scheduler portion for allocating available bandwidth between the communication channels in accordance with the number of allocated PDUs.
4. The system of claim 3, wherein:
the first scheduler part distributes a certain number of PDUs to each communication channel according to the total demand of all remote stations for available bandwidth; and
the second scheduler section assigns the available PDUs to the allocated PDUs based on priorities assigned to the available PDUs.
5. The system of claim 4 wherein said second scheduler section further assigns available PDUs to said allocated PDUs based on time stamps assigned to each of said available PDUs.
6. A system as recited in claim 3 wherein said second scheduler portion provides a status of available PDUs assigned to said first scheduler portion.
7. The system of claim 1, wherein said system is a Time Division Multiple Access (TDMA) system.
8. A method for allocating bandwidth in a point-to-multipoint (PTMP) system comprising at least one hub station and at least one remote station, said method comprising the step of allocating available bandwidth between stations in accordance with the aggregate demand for available bandwidth by all remote stations of said system, irrespective of the individual demand for each remote station by respective users of said remote stations.
9. The method of claim 8 wherein said PTMP system is a Time Division Multiple Access (TDMA) system.
10. The method of claim 8, wherein the communication channel between the stations is formatted to comprise a plurality of frames, each frame including a forward link portion and a reverse link portion, the method further comprising the step of assigning a ratio of forward link portion to reverse link portion of each frame independently of a ratio of forward link portion to reverse link portion of other frames.
11. The method of claim 8 wherein a communication channel between said stations is formed by a plurality of Protocol Data Units (PDUs), said method further comprising the steps of:
distributing a certain number of PDUs to each communication channel; and
allocating available bandwidth in each communication channel according to the number of allocated PDUs.
12. The method of claim 11 wherein the quantity of PDUs is allocated to each communication channel based on the total demand for available bandwidth by all remote stations, the method further comprising the step of assigning each available PDU to an allocated PDU based on the priority assigned to the available PDU.
13. The method of claim 12, further comprising the step of assigning available PDUs to the allocated PDUs according to a time stamp assigned to each available PDU.
14. The method of claim 11, further comprising the step of providing status of assigned available PDUs.
15. The method of claim 14, further comprising the step of allocating available bandwidth according to the status of assigned available PDUs.
16. An improvement is made in a Time Division Multiple Access (TDMA) system having management capabilities for communication between a central station and remote stations, wherein allocation of available bandwidth between stations is determined based on the overall demand for available bandwidth by all remote stations in the system, without regard to the individual needs of each remote station by the individual users of the remote stations.
17. The system of claim 16, wherein:
the communication channel between the stations is designed to be made up of a plurality of frames, each frame including a forward link portion and a reverse link portion; and
the ratio of the forward link portion to the reverse link portion of each frame is assigned independently of the ratio of the forward link portion to the reverse link portion of the other frames.
18. The system of claim 16, wherein a communication channel between the stations is formed by a plurality of Protocol Data Units (PDUs), the system further comprising:
a first scheduler section for allocating a respective number of PDUs to each communication channel in accordance with a total demand for available bandwidth by all remote stations; and
a second scheduler section for allocating available bandwidth in each communication channel in accordance with said number of allocated PDUs, and for assigning said available PDUs to said allocated PDUs in accordance with priorities assigned to each available PDU.
19. The system of claim 18, wherein:
said second scheduler portion provides status of assigned available PDUs to said first scheduler portion; and
available bandwidth is allocated according to the status of the assigned available PDUs.
20. A computer readable medium having embodied thereon a program for causing a processor to allocate bandwidth in a point-to-multipoint (PTMP) system comprising at least one hub station and at least one remote station, said computer readable medium comprising means for causing said processor to allocate available bandwidth between stations according to an aggregate demand for available bandwidth by all remote stations in said system without regard to individual demands of each remote station by individual users of said remote stations.
21. The computer readable medium of claim 20, wherein the communication channel between the stations is designed to be formed of a plurality of frames, each frame including a forward link portion and a reverse link portion, the computer readable medium further comprising means for causing the processor to allocate the ratio of the forward link portion to the reverse link portion of each frame independently of the ratio of the forward link portion to the reverse link portion of other frames.
22. A computer-readable medium in accordance with claim 20, wherein a communication channel between stations is formed of a plurality of Protocol Data Units (PDUs), said computer-readable medium further comprising:
means for causing said processor to allocate a number of PDUs to each communication channel; and
means for causing the processor to allocate available bandwidth in each communication channel based on the number of allocated PDUs.
23. The computer readable media of claim 22 wherein said plurality of PDUs is allocated based on the total demand for available bandwidth by all remote stations, and further comprising means for causing said processor to assign available PDUs to said allocated PDUs based on the priority assigned to each available PDU.
24. A computer readable medium in accordance with claim 23, further comprising means for causing said processor to assign available PDUs to said allocated PDUs in accordance with a time stamp assigned to each of said available PDUs.
25. The computer-readable medium of claim 22, further comprising means for causing the processor to provide a status of assigned available PDUs.
26. The computer-readable medium of claim 25, further comprising means for causing the processor to allocate available bandwidth based on the status of assigned available PDUs.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US60/266,475 | 2001-02-06 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1064532A true HK1064532A (en) | 2005-01-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN1498472A (en) | | Systems and methods for real-time adaptive capacity scheduling |
| US6438134B1 (en) | | Two-component bandwidth scheduler having application in multi-class digital communications systems |
| CN1981484B (en) | | Pipeline scheduler with multiple scheduling lanes and scheduling method used therein |
| US7027457B1 (en) | | Method and apparatus for providing differentiated Quality-of-Service guarantees in scalable packet switches |
| US6879561B1 (en) | | Method and system for wireless packet scheduling with per packet QoS support and link adaptation |
| US6747976B1 (en) | | Distributed scheduling architecture with efficient reservation protocol and dynamic priority scheme for wireless ATM networks |
| JP4881887B2 (en) | | Traffic shaping functions and equipment |
| US8442063B1 (en) | | System and method for scheduling unicast and multicast traffic |
| WO2001069852A2 (en) | | Data rate limiting |
| CA2285243A1 (en) | | Hierarchical packet scheduling method and apparatus |
| WO2003071740A1 (en) | | A method of priority control in wireless packet data communications |
| WO2002073865A2 (en) | | Time based packet scheduling and sorting system |
| US20040120276A1 (en) | | Scheduling system and method for a burst switch |
| US7477636B2 (en) | | Processor with scheduler architecture supporting multiple distinct scheduling algorithms |
| EP1867112B1 (en) | | Assigning resources to items such as processing contexts for processing packets |
| EP2134037B1 (en) | | Method and apparatus for scheduling data packet flows |
| US7245624B2 (en) | | Processor with table-based scheduling using software-controlled interval computation |
| CN101061681B (en) | | Communication time fair transmission management without explicit traffic specifications for wireless networks |
| US20240385876A1 (en) | | Self-clocked round robin scheduler |
| US7602797B2 (en) | | Method and apparatus for request/grant priority scheduling |
| HK1064532A (en) | | System and method for real time adaptive capacity scheduling |
| US7215675B2 (en) | | Processor with software-controlled programmable service levels |
| HK1064233A (en) | | Dynamic bandwidth allocation |
| JP4750331B2 (en) | | Method of packet allocation in a wireless communication system |
| Yu et al. | | A Multiple Access Protocol for Multimedia Transmission over 5G Wireless Asynchronous Transfer Mode Network |