HK1165148A - Retransmission admission mechanism in a managed shared network with quality of service - Google Patents
Description
Cross Reference to Related Applications
This application claims priority from U.S. Provisional Patent Application No. 61/145,181, filed January 16, 2009, the entire contents of which are incorporated herein by reference.
Technical Field
The methods and apparatus disclosed herein relate generally to communication networks and, more particularly, some embodiments relate to retransmission mechanisms that reduce packet error rates.
Background
A home network may include a variety of subscriber devices configured to deliver subscriber services within a home. These subscriber services include the delivery of multimedia content, such as streaming audio and video, over a home network to a subscriber device where it is presented to the subscriber. As the number of available subscriber services increases and they become more popular, the number of devices connected within each home network also increases. The increased number of services and devices increases the complexity of coordinating communications between network nodes.
The home network will typically apply quality of service (QoS) parameters to ensure that user content is delivered under the desired quality criteria, which helps ensure a satisfying user experience. For example, MoCA (Multimedia over Coax Alliance) networks employ a central network controller to set parameterized quality of service (PQoS) standards for network communications between nodes. A flow of resource-guaranteed, unidirectional packets, identified by a "flow ID" and sent from one node to one or more other nodes, is generally considered a parameterized quality of service (PQoS) flow.
Some communication networks have an optional retransmission mechanism to reduce packet error rates. Because there is a time interval between the transmission of a packet and the acknowledgment of its receipt, the transmitter must keep each transmitted packet in its buffer until the packet is acknowledged or a predetermined amount of time has elapsed. For some managed shared networks with quality of service, this interval may be the length of a MAP (media access plan) cycle for the network. In addition, the receiver must arrange the packets in their original transmission order and provide them to the application (i.e., the function or higher software layer) in the proper sequence. If a packet is corrupted, the receiver must wait for retransmission of the packet or for a timeout before it can provide the packets following the corrupted packet to its application. While waiting for the retransmission or timeout, those packets must be held in the receiver's buffer.
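The receiver-side buffering burden described above can be illustrated with a toy reorder buffer: in-order packets are released to the application immediately, while packets behind a gap are held until the retransmission arrives or a timeout skips the gap. The class and method names are illustrative only, not part of any MoCA API.

```python
# Sketch of a receiver reorder/hold buffer (hypothetical, for illustration).
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0    # next sequence number owed to the application
        self.held = {}       # seq -> packet, waiting behind a missing packet
        self.delivered = []  # stands in for handing packets to the application

    def on_packet(self, seq, packet):
        """Accept a (possibly out-of-order) packet; release any in-order run."""
        if seq < self.next_seq:
            return  # duplicate of an already-delivered packet
        self.held[seq] = packet
        self._flush()

    def on_timeout(self):
        """Give up on the missing packet and skip past the gap."""
        if self.held:
            self.next_seq = min(self.held)
            self._flush()

    def _flush(self):
        while self.next_seq in self.held:
            self.delivered.append(self.held.pop(self.next_seq))
            self.next_seq += 1
```

Note how every packet received after a corrupted one accumulates in `held` until `on_timeout` or the retransmission fires, which is exactly the buffer-space cost that motivates the admission mechanism below.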
Disclosure of Invention
In accordance with various embodiments of the disclosed method and apparatus, a node on a network (also referred to as a network device) is programmed to generate and submit a request to a network controller to initiate a PQoS transaction to establish or update a parameterized quality of service (PQoS) flow with packet retransmissions. In accordance with one embodiment, to enable retransmission of PQoS flows, a setup/grant procedure is provided to ensure that appropriate node-level resources (including transmit and receive buffers and associated processing power in the transmitter and receiver) and network-level resources (e.g., network slots) are allocated. According to one embodiment of the disclosed method and apparatus, an approval process is performed before initiating retransmission to ensure compatibility. The methods and apparatus disclosed herein extend the PQoS approval protocol in the MoCA 1.1 standard formulated and published by the multimedia over coax alliance. The PQoS approval protocol is extended to support the approval of PQoS flows that employ retransmission for lost or damaged packets to achieve better packet error rates than flows without retransmission mechanisms.
In one embodiment of the disclosed method and apparatus, retransmission of lost/damaged packets uses additional network-level and node-level resources (buffer space, processing power, and management) beyond what is contained in the traffic specification (TSpec) provided in MoCA 1.1, which defines both network-level and node-level resources. Accordingly, in addition to the TSpec parameters, a parameter called "NUMBER_RETRY" is introduced into the PQoS approval process. NUMBER_RETRY defines the maximum number of retransmissions requested for lost/damaged packets; a value of "0" indicates that retransmission is not requested. Retransmissions are enabled and disabled on a per-PQoS-flow basis. According to one embodiment of the disclosed method and apparatus, only unicast PQoS flows employ retransmissions. The methods and apparatus disclosed herein often refer to MoCA as an exemplary application. However, the methods and apparatus disclosed herein may be applicable to any other network having a coordinating network controller. These systems and methods may be used in conjunction with networks that retransmit at layer 2 but experience large packet acknowledgment delays, and therefore require large buffer space.
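The relationship between the TSpec parameters and the added NUMBER_RETRY parameter can be sketched as a simple record; the field names follow the parameter names used in this document, but the record itself is a hypothetical illustration, not a MoCA 1.1 data structure.

```python
# Hypothetical flow-request record: TSpec-like parameters plus NUMBER_RETRY.
from dataclasses import dataclass

@dataclass
class FlowRequest:
    flow_id: int
    t_packet_size: int     # bytes per PDU (TSpec)
    t_peak_data_rate: int  # kbit/s (TSpec)
    t_burst_size: int      # burst characteristic (TSpec)
    number_retry: int = 0  # maximum retransmissions; 0 => none requested

    @property
    def retransmission_requested(self) -> bool:
        return self.number_retry > 0
```

A request with `number_retry=0` is an ordinary PQoS flow request; any positive value asks the network to reserve the extra node-level and network-level resources discussed above.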
According to various embodiments, a method of retransmission approval in a MoCA network having a network controller node and a plurality of associated network nodes comprises the following operations: receiving, at the network controller node, a request to establish or update a parameterized quality of service flow to support delivery with retransmissions (sometimes referred to as a retransmitted flow); the network controller node sending a message to a first plurality of nodes in the network requesting information regarding whether the retransmitted flow can be established or updated; the network controller node receiving responses from the first plurality of nodes, wherein each response includes information regarding whether the corresponding node can support the retransmitted flow; and the network controller node determining whether the retransmitted flow can be supported by the first plurality of nodes. Additionally, in some embodiments, the network controller informs the network nodes whether the retransmitted flow can be supported. In one embodiment, the information about whether a node can support a retransmitted flow includes packet size, peak packet rate, burst characteristics, and buffer space available to support retransmission.
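The controller-side decision in the operations above reduces to aggregating per-node answers. A minimal sketch, assuming a simple response dictionary per node (not the actual L2ME frame format):

```python
# Hypothetical admission check: approve the retransmitted flow only if every
# queried node reports that it can support it.
def admit_retransmission_flow(responses):
    """responses: dict mapping node_id -> dict with a 'can_support' bool and
    optional resource details (buffer space, burst size, ...)."""
    unable = [n for n, r in responses.items() if not r["can_support"]]
    decision = "approve" if not unable else "reject"
    return decision, unable

decision, unable = admit_retransmission_flow({
    3: {"can_support": True, "buffer_bytes": 65536},
    7: {"can_support": True, "buffer_bytes": 32768},
})
```

Returning the list of nodes that could not support the flow mirrors the document's point that a rejection is accompanied by information about the limiting resources.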
Furthermore, in yet another embodiment, the determination is based on the sufficiency of the ingress node resources, the sufficiency of the egress node resources, the number of supported retransmitted flows, the burst size supported by the nodes, and the sufficiency of the aggregate time slots in the network. In yet another embodiment, the determination is based on the overhead of the flow as an indicator of the specific bandwidth required to support the retransmitted flow. In one embodiment, the overhead of a flow is obtained as follows:
where N_TXPS is the total number of transmissions per second of the flow, N_F is the number of bytes transmitted per flow transmission, OFDM_B is the number of bits per OFDM (orthogonal frequency division multiplexing) symbol, T_CP is the length of the cyclic prefix, T_FFT is the IFFT/FFT (inverse fast Fourier transform/fast Fourier transform) period, T_IFG is the inter-frame gap (IFG) period, and T_PRE is the length of the preamble of each packet.
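The equation itself was rendered as an image in the original publication and is not recoverable verbatim. A plausible form, consistent with the variable definitions above and with the later note that the first right-hand term counts OFDM symbols per transmission, would be the following hypothetical reconstruction (T_SLOT, the duration of one SLOT_TIME, is an assumed symbol not defined in the text):

```latex
\mathrm{CoF} \;=\; N_{TXPS}\cdot
\frac{\left\lceil \dfrac{8\,N_F}{\mathrm{OFDM}_B} \right\rceil
\left(T_{FFT}+T_{CP}\right) \;+\; T_{IFG} \;+\; T_{PRE}}{T_{SLOT}}
\qquad \text{(reconstruction of equation (1))}
```

Here the ceiling term is the number of OFDM symbols per flow transmission, the parenthesized sum is the airtime of one transmission, and dividing by T_SLOT expresses the result in SLOT_TIMEs per second, matching the units stated later in the document.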
In one embodiment, the network controller informs the nodes of the network whether retransmitted flows can be supported, and the network nodes participating in a retransmitted flow then allocate the resources required to support it. The submission to the network controller may include information identifying the nodes that make up the first plurality of network nodes.
Other features of the disclosed method and apparatus will be apparent from the following detailed description and from the accompanying drawings, which illustrate, by way of example, features of embodiments of the disclosed method and apparatus. The summary herein is not intended to limit the scope of the invention, which is defined solely by the appended claims.
Drawings
The disclosed methods and apparatus are described in detail below, in accordance with one or more embodiments, in conjunction with the following figures. The drawings are only for purposes of illustrating exemplary embodiments or examples of particular embodiments. These drawings are provided solely to assist the reader in understanding the disclosed methods and apparatus and should not be construed as limiting the breadth and scope of the claimed invention. It should be noted that for convenience and clarity of illustration, the drawings are not necessarily drawn to scale.
FIG. 1 illustrates an example of an environment in which some embodiments of the disclosed methods and apparatus may be implemented.
Fig. 2 illustrates an example of a flow of retransmission approval in accordance with the systems and methods described herein.
Fig. 3 illustrates an example flow of information exchanged for retransmission approval in accordance with the systems and methods described herein.
Fig. 4 shows an example of a flow of retransmission approval according to the example of the flow of information shown in fig. 3.
FIG. 5 illustrates a flow of computing a response to a Layer 2 Management Entity (L2ME) frame according to one embodiment of the systems and methods described herein.
FIG. 6 illustrates an exemplary computing module that may be used to implement features of embodiments of the disclosed systems and methods.
The drawings are not intended to be exhaustive or to limit the disclosed methods and apparatus to the forms disclosed. It is understood that the disclosed method and apparatus may be practiced with modification and alteration. The scope of the invention is to be defined only by the claims and equivalents thereof.
Detailed Description
According to various embodiments of the disclosed method and apparatus, a node on a network (also referred to as a network device) is programmed to generate and submit a request to a network controller for setting/approving a parameterized QoS flow with retransmission requirements. According to one embodiment, a setup/grant procedure is provided to ensure that the transmitter and receiver are assigned the appropriate transmit buffer and associated processing power and receive buffer and associated processing power, respectively. According to one embodiment of the disclosed method and apparatus, an approval procedure is implemented to ensure compatibility before initiating retransmission.
Before describing the disclosed methods and apparatus in detail, it is helpful to describe an environment in which the disclosed methods and apparatus can be practiced. The network of fig. 1 will be described below for purposes of illustration. A wired communications medium 100 is shown. In some embodiments, the wired communications medium may be a coaxial cable system, a power line system, a fiber optic cable system, an ethernet cable system, or other similar communications medium. Alternatively, the communication medium may be a wireless transmission system. In the illustrated embodiment, the communication medium 100 is a pre-installed coaxial cable deployed within a residence 101.
The network includes a plurality of nodes 102, 103, 104, 105, and 106 that communicate according to a communication protocol. For example, the communication protocol may include a network standard, such as the Multimedia over Coax Alliance (MoCA) standard. In the illustrated embodiment, the communication protocol specifies a packet-based communication system. In this embodiment, a physical layer (PHY) packet includes a preamble and a payload. The PHY preamble is typically located at the beginning of each packet to assist the receiver in detecting the packet and acquiring the physical layer parameters needed to decode it correctly. The communication protocol may have multiple predefined PHY preambles for use with different types of network communications. For example, one type of preamble may be employed when transmitting in diversity mode (a communication mode used when little is known about the communication channel). Another type of preamble may be employed when transmitting media access plan (MAP) information. Other types of packets may employ other types of preambles.
The PHY payload is used to convey the data content of the packet. In some cases, the PHY payload has a predetermined format. For example, in a MoCA network, each of the network maintenance information and MAP information has a format specified by the MoCA protocol. In other cases, the PHY payload may have an indeterminate format. For example, the PHY payload of the media stream may include an embedded ethernet packet or a portion thereof.
In some embodiments, activities on the network are controlled by a network controller (NC) node. In one such embodiment, one of the nodes is selected to perform the functions of the network controller based on procedures defined by the communication protocol. In networks employing a network controller, the network controller uses MAPs to schedule communications between network nodes. The MAP is transmitted as a packet, and such MAP packets are transmitted periodically. The MAP is generated based on reservation requests from the network nodes. The network controller also performs an admission process when a new node requests admission to the network.
The nodes described herein are associated with various devices. For example, in a system deployed within a residence 101, a node may be a network communications module connected to a computer 109 or 110. Such nodes enable computers 109 and 110 to communicate over communication medium 100. Alternatively, the node may be a module connected to the television 111 that allows the television to receive and display media from one or more network nodes. The nodes may also be connected to speakers or other media playback devices 103 that play music. The node may be connected to a module configured to interface with an internet or cable service provider 112, for example, to provide ad hoc access, digital video recording, media streaming functionality, or network management services to the home 101.
In one embodiment of the MoCA environment, any node with Layer 2 Management Entity (L2ME) capability can initiate a PQoS transaction in the network. A network coordinator or network controller (NC) node is responsible for admitting PQoS flows to the MoCA network by requesting resource utilization information from each node. The admission may be made with or without enabling the retransmission function. If there are sufficient resources to admit the flow, the network controller node guarantees that sufficient transmission opportunities are available to the flow. If there are not enough resources, the network controller node rejects the requested flow and provides information about the remaining resources. PQoS flow transactions for MoCA devices can be divided into two main groups. One group is the admission-control PQoS flow transactions, which include: the establish PQoS flow transaction; the update PQoS flow transaction; and the delete PQoS flow transaction. The other group is the flow-management PQoS flow transactions, which include: the list PQoS flow transaction; the query PQoS flow transaction; and the maintain PQoS flow transaction.
In one embodiment, a PQoS flow may be transmitted from one ingress node to a single egress node or to multiple egress nodes. An ingress node is a node where a PQoS flow enters the network. An egress node is a node where the PQoS flow leaves the network. Note that a PQoS flow having a plurality of egress nodes is transmitted using the greatest common divisor (GCD) physical layer (PHY) scheme (also referred to as a broadcast scheme) of the ingress node. GCD is a modulation format that is computed by a node for transmission to multiple receiving nodes. The GCD PHY employs, for each group of subcarriers, the maximum constellation density supported by all of the selected nodes on the network. For PQoS flows transmitted with the GCD PHY, the ID of the egress node is preferably set to 0x3F.
The systems and methods described herein may be used in establish and update flow transactions. Accordingly, some embodiments define establish and update flow transactions to create a new PQoS flow or to update the properties of an existing PQoS flow. One exemplary application of the update flow transaction is to change the properties of a flow in response to starting/stopping trick-mode playback. Another example is to change the properties of a flow in response to changes in the available bandwidth of the MoCA network. In one embodiment, any node may be configured to request creation or updating of a PQoS flow. In the embodiment described herein, both the establish and update PQoS flow transactions are performed within three L2ME waves. The information exchanged in these transactions is described in detail below.
Before describing an example of the flow in detail, an overview is provided. Fig. 2 illustrates an exemplary flow of retransmission setup in accordance with one embodiment of the systems and methods described herein. Referring to fig. 2, in operation 204, a node in the network submits a request to set up a flow with retransmission. For example, an ingress node submits a request to establish or update a PQoS flow. In one embodiment, the request is sent to a network controller or coordinator node of the network. Fig. 2 is described in terms of one embodiment employing a network controller node.
In operation 210, the network controller sends a PQoS flow request to the network nodes. The request asks the nodes for information relevant to supporting retransmission in the network. This includes, for example, information for determining whether the nodes can support the requested PQoS flow with retransmissions, whether there is sufficient buffer size and processing capacity, and so on. In one embodiment, this request is broadcast to all network nodes. In another embodiment, the requesting node identifies the nodes participating in the PQoS flow, and the network controller then sends its request to the identified nodes.
In operation 214, the requested node responds with the requested information to the network controller. In one embodiment, the response of a node includes the following information: whether the node can support the flow with retransmission, whether it has sufficient buffer size and processing capacity, and whether it is an ingress or egress node.
In operation 218, the network controller receives information from the requested node. Once the information is received, the network controller evaluates the information and notifies one or more nodes in the network of the results. In one embodiment, only the ingress node is notified of the result. In other embodiments, multiple or all nodes are notified of the result. Next, in step 222, the notified nodes issue their responses to end the transaction.
Fig. 3 illustrates an example of a data flow for retransmission approval in accordance with one embodiment of the systems and methods described herein. Fig. 4 shows an example of the retransmission approval process corresponding to the data flow shown in fig. 3. An example of this flow is described below in conjunction with figs. 3 and 4. In this example, the network of nodes 307 includes a network controller 305. The ingress node 303 requests the establishment or update of a PQoS flow. Accordingly, in operation 362, ingress node 303 submits a request to network controller node 305 to establish or update a PQoS flow with retransmission. To begin establishing or updating a PQoS flow in a MoCA network, the ingress node preferably sends a "Submit L2ME" frame to the network controller. Preferably, the following additional constraints are observed for the fields in the "Submit L2ME" frame:
● VENDOR_ID = 0x0 (MoCA)
● TRANS_TYPE = 0x1 (QoS)
● TRANS_SUBTYPE = 0x1 (establish PQoS flow) or 0x2 (update PQoS flow)
● WAVE0_NODEMASK is set to indicate all L2ME-capable nodes in the MoCA network
● MSG_PRIORITY = 0xF0
● TXN_LAST_WAVE_NUM = 2
● L2ME_PAYLOAD as shown in Table 1
Table 1 provides an example of the L2ME payload of the submission 310 used to establish or update a PQoS flow.
TABLE 1 - Submit L2ME payload
In one embodiment, a retransmitted flow is not approved unless the node can support consecutive packet data units (PDUs) of the given packet size at the peak packet rate, given the burst characteristics, and has enough buffer space to support retransmission. Accordingly, in the above example, T_PACKET_SIZE, T_PEAK_DATA_RATE, T_BURST_SIZE, and MAX_NUMBER_RETRY are used to determine whether a flow can be supported. The node preferably does not admit a PQoS flow unless it supports consecutive PDUs of T_PACKET_SIZE at T_PEAK_DATA_RATE, with the burst characteristics governed by T_BURST_SIZE, and has sufficient buffer space for retransmission (when requested). The injected bit rate for a PQoS flow is defined as the total number of bits in the last (T_BURST_SIZE + 1) received PDUs of the PQoS flow divided by the total time it takes to transfer these PDUs to the MoCA ECL of the ingress node. The injected PDU rate for a PQoS flow is defined as (T_BURST_SIZE + 1) divided by the time it takes to transfer those (T_BURST_SIZE + 1) PDUs to the MoCA ECL.
In one embodiment, for each approved flow, each node is preferably able to maintain the PQoS flow as long as the following requirements are met:
● The injected bit rate is always less than or equal to the T_PEAK_DATA_RATE of the PQoS flow.
● The injected PDU rate is always less than or equal to T_PEAK_DATA_RATE/T_PACKET_SIZE for the PQoS flow.
● All injected PDUs have a length less than or equal to T_PACKET_SIZE.
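The rate definitions and the three maintenance conditions above can be checked together in a few lines. This is a sketch under the stated definitions; the function name and the list-of-tuples input are illustrative, and real nodes would perform this policing in hardware or firmware.

```python
# Hypothetical TSpec policing check over the last (T_BURST_SIZE + 1) PDUs.
def flow_within_tspec(pdus, t_peak_data_rate_kbps, t_packet_size, t_burst_size):
    """pdus: list of (arrival_time_s, length_bytes), oldest first, with
    exactly t_burst_size + 1 entries (the window the definitions use)."""
    assert len(pdus) == t_burst_size + 1
    elapsed = pdus[-1][0] - pdus[0][0]          # time to transfer the window
    total_bits = sum(8 * length for _, length in pdus)
    injected_bit_rate = total_bits / elapsed / 1000.0   # kbit/s
    injected_pdu_rate = (t_burst_size + 1) / elapsed    # PDUs/s
    max_pdu_rate = t_peak_data_rate_kbps * 1000 / (8 * t_packet_size)
    return (injected_bit_rate <= t_peak_data_rate_kbps      # condition 1
            and injected_pdu_rate <= max_pdu_rate           # condition 2
            and all(length <= t_packet_size for _, length in pdus))  # condition 3
```

For example, three 1250-byte PDUs arriving 1 ms apart inject 15,000 kbit/s, which satisfies a 20,000 kbit/s T_PEAK_DATA_RATE but violates a 10,000 kbit/s one.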
In one embodiment, the attributes of a PQoS flow are described by the TSPEC parameters and the MAX_NUMBER_RETRY requirement. An example of how all other nodes in a MoCA network use these parameters is described below, including a description of how the network controller node determines whether to grant establishment of the requested PQoS flow with or without retransmissions.
In operation 363, the network controller node 305 initiates wave 0 of the procedure. In one MoCA embodiment, wave 0 (311) is initiated with a request L2ME frame (312). In this example, wave 0 (311) informs all nodes 303 and 307 about the proposed PQoS flow establishment or update and gathers information about the current flow allocations from nodes 303 and 307. The network controller node 305 initiates wave 0 (311) with a request L2ME frame whose format is based on the submission 310 shown in Table 1.
At operation 365, nodes 303 and 307 respond to the request. In the illustrated example of wave 0 (311), each node responds to the network controller node 305 with a response L2ME frame. In one embodiment, the response accounts for the sum of the overheads of the existing PQoS flows. The response L2ME frame for the establish PQoS flow/update PQoS flow transaction conforms to the L2ME format. In this example, the following additional constraints are observed:
● Bit 0 of RESP_STATUS is set to "1"
● L2ME_PAYLOAD as defined in Table 2
Table 2 is an example of the L2ME_PAYLOAD of response L2ME frames used to establish and update PQoS flows.
TABLE 2 - Response L2ME payload
Fig. 5 illustrates an example of computing a response to an L2ME frame according to one embodiment of the systems and methods described herein. In this example, at operation 391, each requested node computes the payload of a "response L2ME" frame by computing the value of EXISTING_STPS, which is the sum of the flow overheads of all existing PQoS flows for which the node is the ingress node, except for the new or updated PQoS flow. The contribution of each PQoS flow is its flow overhead, calculated in one embodiment using equation (1) below. In operation 392, each node calculates the EXISTING_TXPS value for all existing PQoS flows except the new or updated PQoS flow. This is the sum of the COST_TXPS of each PQoS flow that has that node as its ingress node.
In operation 393, the node calculates the COST_STPTX parameter as the CoF of the new or updated PQoS flow, expressed as a multiple of SLOT_TIMEs per PQoS flow transmission, according to equation (1). At operation 394, if there is an ingress or egress node limit on the PQoS flow throughput, the node calculates the remaining node capacity in kilobits/second (REM_NODE_CAPACITY) as defined in Table 2. At operation 395, the node determines whether the retransmission request can be accommodated, based on the available buffer size and buffer control logic. Next, at operation 396, if there is an ingress or egress node limit on the PQoS flow throughput, the remaining burst capacity (REM_BURST_SIZE_RETRANSMISSION) is calculated.
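The aggregation in operations 391 and 392 can be sketched as a single pass over the node's flow table. The flow records and field names (`cost_stps`, `cost_txps`) are hypothetical stand-ins for the per-flow CoF values of equation (1).

```python
# Hypothetical sketch of operations 391-392: sum per-flow costs over all
# existing PQoS flows for which this node is the ingress node, excluding
# the flow currently being established or updated.
def existing_costs(flows, this_node, exclude_flow_id):
    existing_stps = 0  # sum of SLOT_TIMEs/second costs (EXISTING_STPS)
    existing_txps = 0  # sum of transmissions/second costs (EXISTING_TXPS)
    for f in flows:
        if f["ingress"] == this_node and f["flow_id"] != exclude_flow_id:
            existing_stps += f["cost_stps"]
            existing_txps += f["cost_txps"]
    return existing_stps, existing_txps
```

Excluding the new or updated flow itself matters for the update case: the node reports the load it would carry without that flow, and the controller adds the flow's new cost separately.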
Each requested node issues a RESPONSE_CODE; a list of acceptable values is shown in Table 3. If the node rejects the request to establish/update the flow and more than one RESPONSE_CODE applies, the RESPONSE_CODE with the highest value among all applicable RESPONSE_CODEs is selected and included in the wave 0 L2ME response message. If a node is able to satisfy the network controller node's request, it issues a response code of 0x1 or 0x2.
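The highest-value selection rule described above is a one-liner once the code values are known. The numeric assignments below are placeholders for illustration, since the actual Table 3 values are not reproduced in this text.

```python
# Placeholder RESPONSE_CODE values (illustrative; not the actual Table 3).
RESPONSE_CODES = {
    "RESPONSE_CODE_OK": 0x1,
    "RESPONSE_CODE_INVALID_TSPEC": 0x4,
    "RESPONSE_CODE_TOO_MANY_FLOWS": 0x6,
}

def select_response_code(applicable):
    """Return the single code to report: the highest-valued applicable one."""
    return max(applicable, key=RESPONSE_CODES.__getitem__)
```

Reporting only the highest-valued code keeps the wave 0 response to a single field while still conveying the most significant rejection reason.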
TABLE 3 - Examples of response code values
Next, the network controller node 305 initiates wave 1 (313). In wave 1, the network controller node 305 decides the result of the establish PQoS flow or update PQoS flow transaction and determines the values of the other fields of the request information. This is illustrated by operation 368. Upon making the determination, the network controller node 305 informs the nodes of its decision regarding the PQoS flow establish/update request. This is shown by operation 371. The decision may be broadcast to all nodes or sent to particular nodes in the network.
One example of how the network controller node 305 calculates these values and makes a decision to approve or deny the request to establish or update the flow is now described. In this example, the network controller node 305 sends a request L2ME frame for wave 1 using the format shown in table 4. The following additional constraints are observed in the fields.
VENDOR_ID = 0x0 (MoCA)
TRANS_TYPE = 0x1 (QoS)
TRANS_SUBTYPE = 0x1 (establish PQoS flow) or 0x2 (update PQoS flow)
WAVE_STATUS = 0
DIR_LEN = 0x00
TXN_WAVE_N = 0x1
L2ME_PAYLOAD as shown in Table 4.
TABLE 4 - L2ME payload example
The DECISION field provides the result, as determined by the network controller node 305, of the request to establish or update a PQoS flow. Table 5 below shows an example of the meaning of the possible values of this field as defined in an embodiment of MoCA.
TABLE 5 - Examples of decision values
If the update PQoS flow operation fails, in one embodiment, the existing PQoS flow still retains its current TSPEC parameters.
Per the RESPONSE_CODE values shown in Table 3, if a node returns one of the RESPONSE_CODEs listed in the first column of Table 6, the wave 1 request L2ME frame must contain the corresponding decision shown in Table 6. If nodes return more than one of the response code values shown in Table 3, the network controller node 305 may select any decision value shown in Table 6 that corresponds to a returned response code value.
TABLE 6 - Non-bandwidth-related responses
| Response code name | non-Bandwidth decision name |
| RESPONSE_CODE_FLOW_EXISTS | DECISION_FLOW_EXISTS |
| RESPONSE_CODE_TOO_MANY_FLOWS | DECISION_TOO_MANY_FLOWS |
| RESPONSE_CODE_INVALID_TSPEC | DECISION_INVALID_TSPEC |
| RESPONSE_CODE_INVALID_DA | DECISION_INVALID_DA |
| RESPONSE_CODE_LEASE_EXPIRED | DECISION_LEASE_EXPIRED |
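Table 6 is a one-to-one lookup, and can be rendered directly as a mapping from the wave 0 response code to the DECISION the controller places in the wave 1 request frame:

```python
# Direct rendering of Table 6: non-bandwidth response code -> decision.
NON_BANDWIDTH_DECISIONS = {
    "RESPONSE_CODE_FLOW_EXISTS": "DECISION_FLOW_EXISTS",
    "RESPONSE_CODE_TOO_MANY_FLOWS": "DECISION_TOO_MANY_FLOWS",
    "RESPONSE_CODE_INVALID_TSPEC": "DECISION_INVALID_TSPEC",
    "RESPONSE_CODE_INVALID_DA": "DECISION_INVALID_DA",
    "RESPONSE_CODE_LEASE_EXPIRED": "DECISION_LEASE_EXPIRED",
}
```

When several nodes return different non-bandwidth codes, the text above leaves the controller free to pick any corresponding decision from this mapping.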
In one embodiment, the network controller node 305 is configured to evaluate bandwidth-related metrics before approving the establishment or update of a PQoS flow. Examples of such metrics are shown in Table 7.
TABLE 7 - Bandwidth-related metrics
In one embodiment, the network controller node is configured such that it must approve the establishment or update of the PQoS flow if all of the conditions in Table 8 are met.
TABLE 8 - Conditions
In one embodiment, in wave 0 the network controller node 305 does not send a response to itself; instead, it calculates and uses the values of the fields shown in Table 2 when deciding whether to approve or reject the request to establish/update the flow. The network controller node 305 denies the request to establish or update the flow if one or more of the five conditions in Table 8 are not met.
In the illustrated example, the network controller node 305 communicates the above decision in the following manner. In wave 1, the network controller node sends the participating nodes a request L2ME frame whose DECISION field indicates approval of the flow establishment or update.
If the rejection of the establish or update request is due to a non-bandwidth-related reason listed in Table 6, the network controller node sends a request L2ME frame to the participating nodes in wave 1 with the appropriate value in the DECISION field. If any of the bandwidth-related conditions is not met, the network controller node calculates a MAX_PEAK_DATA_RATE value in the payload of the request frame, which is the maximum T_PEAK_DATA_RATE at which a PQoS flow with the given T_PACKET_SIZE could be approved. When a flow is rejected for bandwidth-related reasons, the network controller node indicates the conditions listed in Table 9 in bytes 3:0 of the BW_LIMIT_INFO field.
TABLE 9 - Conditions
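The MAX_PEAK_DATA_RATE reported on a bandwidth rejection can be computed by searching for the largest rate that would still pass the admission check. This sketch assumes a monotone admission predicate `fits(rate_kbps)` (if a rate fits, every lower rate fits), which is not stated in the text but follows from the bandwidth conditions; the function is illustrative, not part of MoCA.

```python
# Hypothetical search for the largest admissible T_PEAK_DATA_RATE.
def max_peak_data_rate(fits, hi_kbps, step_kbps=1):
    """Binary-search the largest rate r in [0, hi_kbps] (a multiple of
    step_kbps) for which fits(r) is True, assuming fits is monotone."""
    lo, hi = 0, hi_kbps // step_kbps
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(mid * step_kbps):
            lo = mid          # mid fits; the answer is at least mid
        else:
            hi = mid - 1      # mid does not fit; the answer is below mid
    return lo * step_kbps
```

The controller would plug in its Table 7/Table 8 evaluation as the `fits` predicate and report the result in the wave 1 payload.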
For MoCA 1.1, examples of the QOS_STPS and QOS_TXPS values employed by a network controller node for different numbers of nodes in a MoCA network are shown in Table 10.
TABLE 10 - Relationship of QOS_STPS and QOS_TXPS values to the number of nodes in a MoCA network
Note that adding a new node to a MoCA network with 2 to 5 nodes results in reduced QOS_STPS and QOS_TXPS thresholds, as shown (by way of example) in Table 10, for subsequent PQoS flow establish or update calculations. If admission of a new node results in the actual total SLOT_TIMEs/second or total flow transmissions/second used by all PQoS flows exceeding the new QOS_STPS and QOS_TXPS thresholds, the existing PQoS flows need not be deleted or updated.
Upon receiving a request L2ME frame indicating a successful establishment or update of a PQoS flow, the nodes of the PQoS flow allocate the requested resources for the established or updated flow. This is shown in operation 374. At operation 377, the nodes reply with a response L2ME frame. In one embodiment, the following additional constraints are observed in each field:
Bit 0 of RESP_STATUS is set to "1"
L2ME_PAYLOAD = 32 bits, Type III reserved
In operation 380, the network controller node 305 initiates wave 2 (315). In one embodiment, this is initiated with a wave 2 request using an L2ME frame. Wave 2 informs ingress node 303 and the other participating nodes 307 that the requested transaction has completed.
The following additional restrictions are preferably observed for each field:
● VENDOR_ID = 0x0 (MoCA)
● TRANS_TYPE = 0x1 (QoS)
● TRANS_SUBTYPE = 0x1 (Establish PQoS Flow) or 0x2 (Update PQoS Flow)
● DIR_LEN = 0x10
● TXN_WAVE_N = 0x2
● L2ME_PAYLOAD = the concatenated Response payloads from Wave 1
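The field restrictions above can be collected into a simple structure. This is a sketch only: the field names follow the document, but the Python representation and builder function are illustrative, not the MoCA 1.1 L2ME wire format:

```python
from dataclasses import dataclass

@dataclass
class Wave2Request:
    """Illustrative container for the Wave 2 Request field values."""
    vendor_id: int = 0x0      # VENDOR_ID: 0x0 (MoCA)
    trans_type: int = 0x1     # TRANS_TYPE: 0x1 (QoS)
    trans_subtype: int = 0x1  # 0x1 = establish PQoS flow, 0x2 = update
    dir_len: int = 0x10       # DIR_LEN
    txn_wave_n: int = 0x2     # TXN_WAVE_N: Wave 2
    payload: bytes = b""      # concatenated Wave 1 response payloads

def build_wave2_request(wave1_payloads, update=False):
    """Assemble a Wave 2 request from the Wave 1 responses (schematic)."""
    return Wave2Request(trans_subtype=0x2 if update else 0x1,
                        payload=b"".join(wave1_payloads))
```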
The establish PQoS flow/update PQoS flow transaction is complete when the nodes provide their final Response L2ME frames. This is shown at operation 384. The following additional restrictions are preferably observed for each field:
● RESP_STATUS = "don't care." This field is Type II reserved.
● L2ME_PAYLOAD = 32-bit Type III reserved, as specified in the MoCA 1.1 specification.
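Taking operations 374 through 384 together, the overall establish/update transaction might be summarized schematically as below; the class interface and method names are entirely illustrative, not part of the MoCA protocol:

```python
def pqos_setup_transaction(controller, ingress, nodes):
    """Schematic three-step flow paraphrasing operations 374-384.

    Returns True when every participating node has sent its final
    Response frame, i.e. the transaction is complete.
    """
    # Wave 1: the controller asks each node whether it can support the
    # flow; on success each node allocates resources and responds.
    responses = [n.handle_wave1_request(controller.decision) for n in nodes]
    # Wave 2: the controller relays the outcome (the concatenated Wave 1
    # payloads) to the ingress node and other participating nodes.
    for n in [ingress, *nodes]:
        n.handle_wave2_request(responses)
    # Final wave: nodes provide their final Response L2ME frames.
    return all(n.send_final_response() for n in [ingress, *nodes])
```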
In some cases, adding nodes to a MoCA network may reduce the ability of the network controller node to guarantee delivery time for all scheduled PQoS flows. For PQoS flow setup or update transactions, the network controller node decides whether a particular PQoS flow request can be granted.
Various embodiments use the cost of flow to decide whether a request should be granted. The cost of flow (CoF) is an indicator of the amount of bandwidth required to support a given PQoS flow. For establish or update PQoS flow transactions, the CoF is computed by the ingress node or by the network controller node, which uses this information to decide whether to approve the requested PQoS flow. For integer values of AFACTOR (see Table 10), the CoF (the number of SLOT_TIMEs per second required by the flow) is calculated as follows:
equation (1)
The first term on the right-hand side of the equation is the number of OFDM symbols per flow transmission, where ⌈X⌉ denotes X rounded up to an integer. The CoF_new value obtained above is the CoF of the new or updated flow. The total overhead of all N existing flows at each ingress node is calculated by summing the CoF values of each flow for which that node is the ingress node.
In some embodiments, the CoF calculation must satisfy certain constraints for non-integer values of AFACTOR. Examples of such constraints are: (1) for AFACTOR1 < AFACTOR2, the flow cost computed for AFACTOR1 should be greater than or equal to the flow cost computed for AFACTOR2; and (2) the flow cost obtained by a vendor-proprietary CoF calculation should be less than or equal to the CoF obtained from equation (1) using AFACTOR = 1.
The meanings of the variables on the right-hand side of the above equation are described in Table 11.
TABLE 11-parameters for calculating flow overhead
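Because equation (1) is not reproduced in this text, the sketch below reconstructs a plausible CoF computation from the Table 11 variable definitions and the surrounding narrative (ceiling of OFDM symbols per transmission, times symbol duration, plus IFG and preamble, converted to SLOT_TIMEs per second). Treat the exact formula and the SLOT_TIME constant as assumptions, not the MoCA 1.1 expression:

```python
import math

SLOT_TIME = 20e-9  # assumed SLOT_TIME granularity in seconds (placeholder)

def cost_of_flow(n_txps, n_f, ofdm_b, t_cp, t_fft, t_ifg, t_pre):
    """Reconstructed CoF: SLOT_TIMEs per second needed by one PQoS flow.

    n_txps : total flow transmissions per second
    n_f    : bytes per flow transmission
    ofdm_b : bits per OFDM symbol
    t_cp, t_fft, t_ifg, t_pre : cyclic-prefix, IFFT/FFT, IFG and
        preamble durations, all in seconds
    """
    symbols = math.ceil(8 * n_f / ofdm_b)            # OFDM symbols per transmission
    txn_time = symbols * (t_fft + t_cp) + t_ifg + t_pre
    return n_txps * txn_time / SLOT_TIME

def ingress_overhead(flows):
    """Total overhead at an ingress node: sum of CoF over its flows."""
    return sum(cost_of_flow(**f) for f in flows)
```

The monotonicity constraints above then follow naturally: a larger AFACTOR (more aggregation) amortizes the per-transmission IFG and preamble over more bytes and can only lower the cost.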
In one embodiment, the decision to aggregate flow packets is made by the ingress node on its own, based on, for example, the availability of transmission time on the MoCA network and the fullness of its input PQoS flow buffer. When data packets arrive at the ingress node's Ethernet Convergence Layer (ECL) from outside the MoCA network, the requesting node selects a T_PACKET_SIZE for the flow that reflects the size of those data packets, and should not attempt to influence the ingress node's aggregation decision by specifying a T_PACKET_SIZE value that differs from the expected size of the data packets entering the ingress node's ECL. The overhead in buffering resources needed to support retransmission of a flow at a node is determined by the node in an appropriate manner.
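The ingress node's aggregation decision described above is left to the implementation; one illustrative policy (the thresholds and function name are assumptions, not from the document) might be:

```python
def should_aggregate(buffer_fill_ratio, tx_time_available_us,
                     fill_threshold=0.5, time_needed_us=100.0):
    """Illustrative ingress-node policy: aggregate packets of a PQoS flow
    when the input flow buffer is filling up and transmission
    opportunities on the MoCA network are scarce.

    buffer_fill_ratio     : input PQoS flow buffer fullness, 0.0-1.0
    tx_time_available_us  : transmission time currently available (us)
    """
    buffer_pressure = buffer_fill_ratio >= fill_threshold
    scarce_airtime = tx_time_available_us < time_needed_us
    return buffer_pressure and scarce_airtime
```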
As used herein, the term module may describe a given unit of functionality that can be performed in accordance with one or more embodiments of the disclosed method and apparatus. As used herein, a module may be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines, or other mechanisms may be used to implement a module. In implementation, the various modules described herein may be implemented as discrete modules, or the functions and features described may be shared in part or in total among one or more modules. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared modules in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate modules, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.
In one embodiment, the components or modules of the disclosed methods and apparatus are implemented in whole or in part using software, which may be implemented to operate with a computing or processing module capable of carrying out the functionality described. One such example of a computing module is shown in fig. 6. Many embodiments are described in terms of this exemplary computing module 400. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the disclosed methods and apparatus using other computing modules or architectures.
Referring to fig. 6, computing module 400 may represent, for example, computing or processing capabilities available in: desktop, laptop or notebook computers; handheld computing devices (PDAs, smartphones, mobile phones, palm computers, etc.); a mainframe, supercomputer, workstation or server; or any other type of special purpose or general purpose computing device as may be desired or appropriate for a given application or environment. A computing module may also represent the computing power embedded in or available to a given device. For example, the computing module 400 may be obtained from electronic devices such as digital cameras, navigation systems, mobile phones, portable computing devices, modems, routers, Wireless Access Points (WAPs), terminals, and other electronic devices that may include some form of processing capability.
The computing module 400 may include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 404. Processor 404 may be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller or other control logic. In the illustrated example, processor 404 is connected to bus 402, although any communication medium may be used to enable interaction with other components of computing module 400 or with the outside.
The computing module 400 may also include one or more memory modules, such as a main memory 408. Main memory 408 may be, for example, random access memory (RAM) or other dynamic memory, used for storing information and instructions to be executed by processor 404. Main memory 408 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 404. Computing module 400 may likewise include a read only memory (ROM) or other static storage device coupled to bus 402 for storing static information and instructions for processor 404.
Computing module 400 may also include one or more various forms of information storage mechanism 410, which may include, for example, a media drive 412 and a storage unit interface 420. The media drive 412 may comprise a drive or other mechanism to support fixed or removable storage media 414. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD (read only or writable) drive, or other removable or fixed media drive. Accordingly, storage media 414 may include, for example, a hard disk, floppy disk, magnetic tape, film, optical disk, CD or DVD, or other fixed or removable medium that is readable, writable, or accessed by media drive 412. As these examples illustrate, the storage media 414 may include a computer usable storage medium having stored thereon computer software or data.
In alternative embodiments, information storage mechanism 410 may include other similar means for allowing computer programs or other instructions or data to be loaded into computing module 400. Such means may include, for example, a fixed or removable storage unit 422 and an interface 420. Examples of such a storage unit 422 and interface 420 may include a program cartridge and cartridge interface, a removable memory (e.g., a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 422 and interfaces 420 that may allow software and data to be transferred from the storage unit 422 to the computing module 400.
The computing module 400 may also include a communications interface 424. Communications interface 424 may be used to allow software and data to be transferred between computing module 400 and external devices. Examples of communications interface 424 may include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX, or other interface), a communications port (such as, for example, a USB port, an IR port, an RS-232 port, a Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 424 are typically carried on signals, which may be electronic, electromagnetic (including optical), or other signals capable of being exchanged by a given communications interface 424. These signals may be provided to communications interface 424 via a channel 428. Channel 428 may carry signals and may be implemented using a wired or wireless communication medium. Some examples of a channel include a MoCA channel over coaxial cable, a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
In this document, the terms "computer program medium" and "computer usable medium" generally refer to physical storage media such as main memory 408, storage unit 420, and media 414. These and other various forms of computer program storage media or computer-usable storage media may be involved in storing and providing one or more sequences of instructions to a processing device for execution. Such instructions on the medium are often referred to as "computer program code" or "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions cause the computing module 400 to perform the features and functions of the disclosed methods and apparatus.
While various embodiments of the disclosed method and apparatus have been described above, it should be understood that they have been presented by way of example only, and not limitation. Likewise, the various diagrams may depict exemplary architectures or other configurations for the disclosed methods and apparatus, which are provided to aid in understanding the features and functionality that may be included in the disclosed methods and apparatus. The claimed invention is not restricted to the illustrated exemplary architectures or configurations; rather, the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of ordinary skill in the art how alternative functional, logical or physical partitioning and configurations can be implemented to achieve the desired features of the disclosed methods and apparatus. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the blocks are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.
While the disclosed methods and apparatus have been described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects, and functions described in one or more of the various embodiments are not limited in their application to the particular embodiments described, but may be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed methods and apparatus, whether or not such embodiments are described, and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the claimed invention should not be limited by any of the above-described embodiments.
Unless otherwise expressly stated, the terms and phrases used herein, and variations thereof, are to be construed as open ended as opposed to limiting. As an example of the foregoing: the term "including" is to be understood as "including but not limited to" or the like; the term "example" is used to give an illustrative example of the item in question, and not an exclusive or limiting list; the terms "a" or "an" are to be understood as meaning "at least one", "one or more", or the like; and adjectives such as "conventional," "traditional," "generic," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given moment, but instead should be read to encompass conventional, typical, generic, or standard technologies that may be available or known at any time, whether presently or in the future. Similarly, when reference is made herein to techniques that are apparent or known to those skilled in the art, such techniques encompass those that are apparent or known to those skilled in the art now or at any time in the future.
The presence of such expansive words and phrases, such as "one or more," "at least," "but not limited to," or other similar terms in some instances should not be construed to mean that a narrowing is intended or required in instances where such expansive terms may not be present. The use of the term "module" does not imply that the components or functionality described or claimed as part of the module are configured within a common package. Rather, any or all of the various components of a module, whether control logic or other components, may be combined in a single package or maintained separately, and may also be distributed in multiple groupings or packages or at multiple locations.
In addition, various embodiments described herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will be apparent to those of skill in the art upon reading this disclosure, the illustrated embodiments and their various alternatives may be practiced without limitation to the illustrated examples. For example, block diagrams and their associated descriptions should not be construed as limited to a particular architecture or configuration.
Claims (22)
1. A retransmission approval method within a MoCA network having a network controller node and a plurality of associated network nodes, the method comprising:
a) receiving in the network controller node a submission requesting establishment or updating of a parameterized quality of service flow with retransmissions;
b) the network controller node sending information to a first plurality of nodes in the network requesting information from the first plurality of nodes regarding whether the retransmission stream can be established or updated;
c) receiving, by the network controller node, responses from the first plurality of nodes, wherein each response includes information regarding whether its corresponding node can support the retransmission stream; and
d) the network controller node determines whether the retransmission stream can be supported by the first plurality of nodes.
2. The method of claim 1, further comprising: the network controller informs the network node whether the flow with retransmission can be supported.
3. The method of claim 1, wherein the information regarding whether a node can support a flow with retransmissions comprises packet size, peak packet rate, burst characteristics, available processing capacity to support retransmissions, and buffer space.
4. The method of claim 1, wherein the decision in operation d) is based on a sufficiency of bandwidth at the ingress node, a sufficiency of bandwidth at the egress node, a number of retransmission streams supported, a burst size supported by the node, and a sufficiency of an accumulated gap time.
5. The method of claim 1, wherein the decision in operation d) is based on a flow overhead, which is an indicator of the specific bandwidth required to support the flow with retransmissions.
6. The method of claim 5, wherein the flow cost is calculated according to the following equation:
wherein N_TXPS is the total number of flow transmissions per second, N_F is the number of bytes per flow transmission, OFDM_B is the number of bits per OFDM symbol, T_CP is the length of the cyclic prefix, T_FFT is the IFFT/FFT period, T_IFG is the IFG period, and T_PRE is the length of the preamble for each packet.
7. The method of claim 1, further comprising: the network controller informs nodes of the network whether the flow with retransmission can be supported.
8. The method of claim 1, further comprising: the network node for the flow with retransmission allocates the requested resources to support the flow with retransmission.
9. The method of claim 1, wherein the submission received in operation a) identifies a node that includes the first plurality of nodes.
10. The method of claim 1, wherein the request in operation b) is a broadcast by the network controller to all nodes in the network.
11. The method of claim 1, wherein the submission received by the network controller includes information indicating network nodes that make up the first plurality of nodes.
12. The method of claim 1, wherein operation d) comprises: the network controller broadcasts the received information about the communication capacity of the plurality of nodes to all nodes in the network.
13. A system, comprising:
a first node on a communication network, the first node comprising a first processor and first computer-executable program code on a first computer-readable medium, the first computer-executable program code configured to generate a submission to a network control node on the network to request establishment or update of a parameterized quality of service to support retransmission flows; and
a network control node on the communication network, the network control node comprising a second processor and second computer-executable program code on a second computer-readable medium, the second computer-executable program code configured to cause the network control node to: receiving the request to establish or update a parameterized quality of service to support delivery of a retransmission flow; sending a message to a first plurality of nodes in the network requesting information from the first plurality of nodes regarding whether the retransmission stream can be established or updated; receiving responses from the first plurality of nodes, wherein each response includes information of whether its corresponding node can support the retransmission stream; and determining whether the retransmission stream can be supported by the first plurality of nodes.
14. The system of claim 13, wherein the operations performed by the network control node further comprise: informing a network node whether the retransmission flow can be supported.
15. The system of claim 13, wherein the information regarding whether a node supports flows with retransmissions comprises packet size, peak packet rate, burst characteristics, available processing capacity to support retransmissions, and buffer space.
16. The system of claim 13, wherein the decision is based on a sufficiency of bandwidth at the ingress node, a sufficiency of bandwidth at the egress node, a number of retransmission streams supported, a burst size supported by the node, and a sufficiency of accumulated gap time.
17. The system of claim 13, wherein the decision is based on a flow overhead that is an indicator of a particular bandwidth required to support the flow with retransmissions.
18. The system of claim 17, wherein the flow cost is calculated according to the following equation:
wherein N_TXPS is the total number of flow transmissions per second, N_F is the number of bytes per flow transmission, OFDM_B is the number of bits per OFDM symbol, T_CP is the length of the cyclic prefix, T_FFT is the IFFT/FFT period, T_IFG is the IFG period, and T_PRE is the length of the preamble for each packet.
19. The system of claim 13, wherein the operations performed by the network control node further comprise: informing nodes of the network whether the retransmission flow can be supported.
20. The system of claim 13, wherein the network node for the retransmission flow is configured to allocate the requested resources to support the retransmission flow.
21. The system of claim 13, wherein the submission to the network controller indicates the network nodes comprising the first plurality of nodes.
22. The system of claim 13, wherein the submission received by the network controller includes information indicating network nodes that make up the first plurality of nodes.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/145,181 | 2009-01-16 | 2009-01-16 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1165148A true HK1165148A (en) | 2012-09-28 |