
WO2019161928A1 - Methods, nodes and system for a multicast service - Google Patents

Methods, nodes and system for a multicast service

Info

Publication number
WO2019161928A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
multicast
nodes
message
neighbor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2018/054648
Other languages
French (fr)
Inventor
Xun XIAO
Zoran Despotovic
Chenghui Peng
Artur Hecker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/EP2018/054648 priority Critical patent/WO2019161928A1/en
Publication of WO2019161928A1 publication Critical patent/WO2019161928A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/02: Details
    • H04L12/16: Arrangements for providing special services to substations
    • H04L12/18: Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/185: Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, with management of multicast group membership
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/16: Multipoint routing

Definitions

  • the present invention generally relates to the field of communication network technology. Particularly, the present invention relates to methods, network nodes and system for a multicast control plane. In particular, the present invention relates to a node for requesting and receiving a multicast service in a network. The present invention further relates to a method for requesting and receiving a multicast service by a node in a network. The present invention further relates to a node for supporting a multicast service in a network of a plurality of nodes. Moreover, the present invention relates to a method for supporting a multicast service in a network of a plurality of nodes. The present invention further relates to a system that comprises a plurality of nodes supporting a multicast resource-to-resource protocol, and to a computer program.
  • BACKGROUND Multicast is a method of providing the same data to a plurality of network nodes in a communication network. Multicast is already supported in many existing communication systems. In fact, there are many IP multicast protocols proposed, standardized and deployed. In the following the notation (S, G) represents a multicast group where S refers to the IP address of the multicast source, and G refers to the particular multicast group IP address.
  • Protocol independent multicast (PIM), originally proposed by Cisco, is widely accepted as a standard way for IP multicast. There are three variants of this multicast protocol available.
  • PIM-DM PIM dense mode
  • PIM-DM Protocol Independent Multicast - Dense Mode
  • RFC Editor Protocol Specification (Revised), RFC Editor, 2005.
  • PIM-DM assumes that almost all possible subnets have at least one receiver wanting to receive the multicast traffic from the source, so the network is flooded with traffic on all possible branches, then pruned back when branches do not express an interest in receiving the packets.
  • PIM-DM allows a routing device to use any unicast routing protocol and performs reverse path forwarding (RPF) checks using the unicast routing table.
  • RPF reverse path forwarding
  • PIM-DM has an implicit join message, so routing devices use the flood-and-prune method to deliver traffic everywhere and then determine where the uninterested receivers are.
  • PIM-DM uses source-based distribution trees in the form (S, G).
  • PIM-SM PIM sparse mode
  • PIM-SM Protocol Independent Multicast - Sparse Mode
  • PIM-SM Protocol Specification (Revised), RFC Editor, 2016.
  • PIM-SM assumes that very few of the possible receivers want packets from each source, so the network establishes and sends packets only on branches that have at least one leaf indicating (by message) an interest in the traffic.
  • PIM-SM has an explicit join message, so routing devices determine where the interested receivers are and send join messages upstream to their neighbors, building trees from receivers to the rendezvous point (RP) where an RP routing device is used as the initial source of multicast group traffic and therefore builds distribution trees in a wildcard form (*, G) where the asterisk (*) indicates that the state applies to any multicast source sending to group G.
  • RP rendezvous point
  • PIM-SSM PIM source-specific multicast
  • SSM PIM source-specific multicast
  • DVMRP Distance Vector Multicast Routing Protocol
  • S, G source-based distribution trees in the form (S, G), and builds its own multicast routing tables for reverse path forwarding (RPF) checks.
  • RPF reverse path forwarding
  • MOSPF adds an explicit join message in the original OSPF, so routing devices do not have to flood their entire domain with multicast traffic from every source.
  • MOSPF uses source-based distribution trees in the form (S, G) as well.
  • the current multicast group addressing scheme (i.e. the multicast IP address identifying a multicast group) is inflexible.
  • Each multicast group will be assigned its multicast IP address from the range of 224.0.0.0 to 239.255.255.255.
  • any receiver has to know the IP address of the multicast group in advance so as to initialize a request to join in the group. This assumes an additional discovery mechanism in the system.
  • allocations and collections of multicast group addresses have to be carefully managed.
  • When a routing protocol is running, it periodically collects the network topology information and figures out the next-hop information to reach every other node in the network. For example, OSPF (Open Shortest Path First) collects information of the network topology and calculates 1-to-N shortest paths. Every routing table entry tells the next-hop information of the shortest path toward a specific network segment. Therefore, for a multicast protocol at the upper layer, there is only one way that a receiver can send its request for joining in a multicast group.
  • OSPF Open Shortest Path First
  • the route given by the routing protocol is the best path to the multicast source, it is not necessarily the best path to reach the multicast distribution overlay. For instance, there could be another node nearby already being a forwarding node in the multicast overlay and the distance to the receiver is much closer than the distance to the multicast source, but the receiver does not know that. Therefore, multi-layer design blocks the path diversity when creating the multicast distribution overlay since other possible ports are simply ruled out when calculating unicast routing overlay. This obviously delays the multicast overlay construction.
  • the multi-layer protocol design also causes longer recovery time because the multicast distribution overlay can be recovered only if the underlying routing overlay recovers. This will be amplified by using link-state routing protocols that need to propagate network changes across the whole network domain. Waiting for the re-convergence of the routing protocol further delays the recovery of the multicast overlay that strictly relies on the routing information. The root reason is also the lack of path diversity for the routing layer itself. Clearly, once the path is down or invalid, a node has to figure out another path using global network information, which takes time to converge.
  • a desirable multicast solution should preferably have the following features.
  • the multicast addressing should be flexible and informational.
  • a multicast protocol should let any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group.
  • the multicast protocol should have abundant path options when creating the multicast distribution overlay. In other words, any host/receiver should be able to quickly join in a multicast group.
  • the multicast distribution overlay should also be resilient to network churn. For the latter two goals, this calls for a new routing protocol design considering both unicast and multicast.
  • an object of the present invention is to provide a method and a node for requesting a multicast service, a method and a node for supporting a multicast service and a system for requesting and supporting a multicast service.
  • the invention relates to a node for requesting and receiving a multicast service in a network comprising a plurality of nodes in accordance with a multicast resource-to-resource protocol.
  • the node has a local neighbor set comprising one or more neighbor nodes of the node.
  • the node is configured to:
  • a node requesting a multicast service informs a subset of its neighbor nodes about the request and receives a number of node IDs that either provide or lead to a node that provides the requested service.
  • This has the advantage that only local information is used and abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join in a multicast group.
  • the multicast distribution overlay is also resilient to network churn.
  • the node is further configured to receive a fourth message in response to the third message and adding the selected candidate node to the local neighbor set.
  • the local neighbor set is updated with a new virtual node. This has the advantage that the candidate node will be considered in further iteration steps.
  • the node is further configured to iteratively repeat the above steps.
  • the requesting node iteratively connects to a node that provides the requested multicast service.
  • This has the advantage that no global routing information is required.
  • the rule to select the candidate node comprises selecting i) a node that is the source of multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
  • the rule for selecting the candidate node further comprises taking into account that the same candidate information is coming from one or more different neighbor nodes.
  • the node is further configured to receive sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages.
  • the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
  • the node is further configured to send sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
  • the multicast service is identified by a multicast ID, the multicast ID comprising a source ID and a group ID.
  • the node is further configured to receive fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
  • the node is further configured to receive a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
  • a method for requesting and receiving a multicast service by a node in a network comprising a plurality of nodes in accordance with a multicast resource-to-resource protocol.
  • the method comprises the following steps:
  • a node requesting a multicast service informs a subset of its neighbor nodes about the request and receives a number of node IDs that either provide or lead to a node that provides the requested service.
  • This has the advantage that only local information is used and abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join in a multicast group.
  • the multicast distribution overlay is also resilient to network churn.
  • the method comprises receiving a fourth message in response to the third message and adding the selected candidate node to the local neighbor set.
  • the local neighbor set is updated with a new virtual node.
  • the method comprises iteratively repeating the previous steps.
  • the requesting node iteratively connects to a node that provides the requested multicast service.
  • This has the advantage that no global routing information is required.
  • the rule for selecting the candidate node comprises selecting i) a node that is the source of multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
  • the rule for selecting the candidate node further comprises taking into account that the same candidate information is coming from one or more different neighbor nodes.
  • the node is further configured to receive sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages. Thereby, the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
  • the node is further configured to send sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
  • the multicast service is identified by a multicast ID, the multicast ID comprising a source ID and a group ID.
  • the node is further configured to receive fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
  • the node is further configured to receive a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
  • the invention relates to a node for supporting a multicast service in a network of a plurality of nodes in accordance with a multicast resource-to-resource protocol.
  • the node has a local neighbor set comprising one or more neighbor nodes of the node.
  • the node is configured to do the following:
  • a request for a multicast service is handled by neighbor nodes iteratively.
  • This has the advantage that no global routing information is required for connecting the requesting node to a node that can provide the multicast service.
  • path establishment is a dynamic process, which can be repeated at any time to take network changes into account.
  • abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join in a multicast group.
  • the rule for selecting the one or more candidate nodes comprises selecting i) a node that is the source of multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
  • the node is further configured to receive sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages.
  • the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
  • the node is further configured to send sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
  • the multicast service is identified by a multicast ID
  • the multicast ID comprises a source ID and a group ID.
  • the node is further configured to receive fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
  • the node is further configured to receive a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
  • a method for supporting a multicast service in a network of a plurality of nodes in accordance with a multicast resource-to-resource protocol. The method comprises the following steps:
  • a request for a multicast service is handled by neighbor nodes iteratively.
  • This has the advantage that no global routing information is required for connecting the requesting node to a node that can provide the multicast service.
  • path establishment is a dynamic process, which can be repeated at any time to take network changes into account.
  • abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join in a multicast group.
  • the rule for selecting the one or more candidate nodes comprises selecting i) a node that is the source of multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
  • the method further comprises receiving sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages.
  • the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
  • the method further comprises sending sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
  • the multicast service is identified by a multicast ID
  • the multicast ID comprises a source ID and a group ID.
  • the method comprises receiving fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
  • the method further comprises receiving a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
  • the forwarding information in each node that forwards the multicast service is deleted. This has the advantage that no global routing information is required.
  • the invention provides a system comprising a plurality of nodes supporting a multicast resource-to-resource protocol.
  • the system comprises a first node according to the first aspect of the invention, the first node is configured to subscribe to a multicast service.
  • the system comprises a plurality of second nodes according to the third aspect of the invention, each of the plurality of second nodes is configured to be iteratively virtually connected to the first node so as to provide a path to a third node that is either a source node of the multicast service or a node that is already involved in the multicast service, the third node and the plurality of second nodes are configured to provide the multicast service to the first node over the path.
  • a complete system is defined that includes the functionality of both the requesting node and the intermediate node.
  • This has the advantage that a multicast service is routed to a requesting node without requiring global routing information. It further leads to a self- stabilizing connection path.
  • abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join in a multicast group.
  • the multicast distribution overlay is also resilient to network churn.
  • the invention relates to a computer program having a program code for performing the method according to the second or fifth aspect, when the computer program runs on a computing device.
  • the method can be performed in an automatic and repeatable manner.
  • the computer program can be respectively performed by the network entity according to the first aspect or by the user equipment according to the fourth aspect.
  • the above apparatuses may be implemented based on a discrete hardware circuitry with discrete hardware components, integrated chips or arrangements of chip modules, or based on a signal processing device or chip controlled by a software routine or program stored in a memory, written on a computer-readable medium or downloaded from a network such as the internet. It shall further be understood that a preferred embodiment of the invention can also be any combination of the dependent claims or above embodiments with the respective independent claim.
  • FIG. 1 shows an extended OVS node architecture block diagram according to an embodiment of the invention.
  • Fig. 2 shows an exemplary network and message exchange according to an embodiment of the invention.
  • Fig. 3 shows further message exchanges in the network according to the embodiment of Fig. 2.
  • Fig. 4 shows a local neighbor set of a node after successful message exchanges according to an embodiment of the invention.
  • Fig. 5 shows a flow rule composition of an extended OVS node according to an embodiment of the invention.
  • Fig. 6 shows an alternative flow rule composition of an extended OVS node according to an embodiment of the invention.
  • Fig. 1 shows an extended Open VSwitch (OVS) architecture 100 for supporting control plane multicast according to an embodiment of the present invention.
  • OVS is a multilayer virtual switch designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols.
  • the extended OVS node architecture 100 consists of two main parts: a kernel part and a userspace part, which are graphically separated by the dotted line.
  • the kernel part comprises a kernel datapath module 110, which implements a packet forwarding engine responsible for per-packet lookup, modification and forwarding.
  • the OVS kernel according to the embodiment of the present invention is further extended to support control plane (CP) network function (NF) multicast communications.
  • CP control plane
  • NF network function
  • the userspace part comprises an ovsdb-server 120, which is responsible for storing information about the configuration of the switch.
  • the userspace part further comprises an ovs-vswitchd module 130 as a local daemon, which can modify the kernel part by modifying, for example, the flow rule entries in a flow table.
  • the flow table could then be a normal routing table comprising a set of data routing rules to be applied to the incoming data packets and a set of control routing rules to be applied to the incoming control packets.
  • the local daemon also exposes the external interface in order to allow the extended OVS node to communicate with a remote SDN controller through the depicted control port.
  • extended OVS node architecture 100 further comprises a resource control agent (RCA) module 132, which is locally implemented inside the ovs-vswitchd module 130.
  • RCA module 132 extends the functionalities of the local daemon (i.e., the ovs- vswitchd module 130) and allows the extended OVS node to support a Resource-to- Resource multicast (R2Rm) protocol as defined by the present invention.
  • R2Rm Resource-to- Resource multicast
  • the local daemon has been chosen because it has already been used successfully as a local agent responsible for many basic control tasks such as implementing the forwarding logic including media access control (MAC) learning, load balancing over bonded interfaces and communicating with the external SDN controller using the OpenFlow protocol.
  • MAC media access control
  • the extended local daemon can support the multicast routing of the present invention.
  • the proposed R2Rm protocol utilizes different messages, comprising:
  • - mcNotifyNb A message that a node sends to notify another node of a candidate that can help it to join in a multicast service
  • - mcNotifyNbAck A message informing a node to establish a virtual connection between a node and the candidate
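  • As an illustration only, the following sketch shows one possible way to represent the R2Rm messages referred to in this description (mcJoin, mcNotifyNb, mcNotifyNbAck, mcPathEst, mcUpCh and mcLeave) as simple data objects; the Python types and field names are assumptions of this sketch and are not part of the protocol.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class MsgType(Enum):
    MC_JOIN = auto()           # request to join a multicast service
    MC_NOTIFY_NB = auto()      # indicates candidate nodes to a requesting node
    MC_NOTIFY_NB_ACK = auto()  # instructs a node to establish a virtual connection
    MC_PATH_EST = auto()       # hop-by-hop path establishment along forwarding nodes
    MC_UP_CH = auto()          # update of a node's multicast state to its neighbors
    MC_LEAVE = auto()          # unsubscribe from a multicast service

@dataclass
class R2RmMessage:
    msg_type: MsgType
    sender_id: int                                   # R2R node ID of the sending node
    multicast_id: str                                # cId of the multicast service concerned
    candidates: list = field(default_factory=list)   # e.g. candidate node IDs in mcNotifyNb
```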
  • Fig. 2 shows an exemplary network comprising nodes 210, 222, 224, 226, 232, 234 and 236 according to an embodiment of the present invention.
  • the nodes may be the extended OVS node within a SDN scenario as described above. It should be noted that the network of Fig. 2 is for illustration purposes only and the network may contain more or fewer nodes and may have different connections between the nodes.
  • Each node 210, 222, 224, 226, 232, 234 and 236 of the network has a connection to one or more neighbor nodes. As depicted in the exemplary network of Fig. 2, node 210 is connected to nodes 222, 224 and 226.
  • the connections may be physical connections or virtual connections. In case of a physical connection, the nodes are directly connected by e.g. a cable. In case of virtual connections, the connection is established via one or more intermediate nodes.
  • Each node maintains a local neighbor set, which comprises the physical or virtual neighbors of the node.
  • the local neighbor set may be a data structure stored in the resource control agent (RCA) module 132 of Fig. 1.
  • the local neighbor set 211 of node 210 comprises nodes 222, 224 and 226.
  • the local neighbor set 225 of node 224 comprises nodes 210, 232, 234 and 236.
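  • Purely as a sketch, the local neighbor set of Fig. 2 could be kept as a small data structure such as the following; the split into physical and virtual members and the class name are illustrative assumptions, not taken from the description.

```python
from dataclasses import dataclass, field

@dataclass
class LocalNeighborSet:
    physical: set = field(default_factory=set)  # directly connected neighbor nodes
    virtual: set = field(default_factory=set)   # neighbors reached via intermediate nodes

    def all_neighbors(self) -> set:
        return self.physical | self.virtual

# Example matching Fig. 2 before any multicast service has been requested:
neighbors_210 = LocalNeighborSet(physical={222, 224, 226})
neighbors_224 = LocalNeighborSet(physical={210, 232, 234, 236})
```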
  • node 210 may be a node requesting a multicast service. Such a node is also referred to as a subscriber or a requesting node. It has to be noted that each node in the network can be a requesting node 210 for a multicast service. Joining a multicast service requires the requesting node 210 to explicitly initialize a request. This operation is done by the requesting node 210 sending a mcJoin message S1 to its neighbor nodes. As described above, the neighbor nodes are organized in the local neighbor set that is maintained by each node. The requesting node can selectively send the mcJoin message S1 to all neighbor nodes or only to a subset of the neighbor nodes of the local neighbor set.
  • a requesting node only sends mcJoin messages to its physical neighbors, i.e. the neighbors to whom it has physical connections.
  • the subscriber can broadcast its interest in a multicast channel to both physical and virtual neighbors. It has to be noted that sending a mcJoin message does not require any convergence of the routing overlay but only connections to some neighbors, even if the connections are direct links.
  • the requesting node 210 sends the mcJoin message S1 to node 224.
  • Node 224 is referred to as an intermediate node, because it is considered to support the requesting node in requesting and receiving the multicast service.
  • the requesting node 210 receives in response to the mcJoin message S1 a mcNotifyNb message S2 from its neighbor node(s). Each mcNotifyNb message S2 indicates one or more candidate nodes that have been selected by the neighbor node(s).
  • the requesting node 210 selects a selected candidate node 232 among the indicated candidate nodes that is adapted to support the subscriber in requesting and receiving the multicast service.
  • the selection may be based on a rule.
  • the rule may define a priority. In one embodiment, the rule defines three types of candidate nodes. A node is considered as a candidate node of first type if the node is a source of the multicast service.
  • a node is considered as a candidate node of second type, if the node is already involved in the multicast service. Being involved in the multicast service means that the node is a subscriber of the multicast service or is an intermediate node that forwards the multicast service (also referred to as a forwarding node).
  • the requesting node may receive the multicast service from one of the first or the second types of candidate nodes.
  • a candidate node of the first type has a higher priority than candidate nodes of the other types.
  • If the requesting node 210 receives mcNotifyNb messages S2 from its neighbor node(s) indicating a node that is a source of the multicast service (candidate node of first type) and a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of second type), the requesting node 210 will select the node that is a source of the multicast service (candidate node of first type).
  • a candidate node of the first type has the same priority as a candidate node of the second type.
  • a candidate node of the first type may selectively have a lower priority than a candidate node of the second type but a higher priority than a candidate node of the third type. This may be the case if the candidate node of the second type is determined to have, e.g., a lower load than the candidate node of the first type.
  • a candidate node of third type is a node that is neither the multicast source nor an intermediate node forwarding multicast traffic but a node that can help the requesting node to get closer to a node that provides the multicast service.
  • a candidate node of third type may be the peer node that is observed to have the closest Resource-to-Resource (R2R) protocol ID value to the multicast source.
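  • A minimal sketch of the three-type priority rule described above is given below; the candidate record layout ('node_id', 'is_source', 'is_involved') and the plain absolute-difference ID metric are assumptions made for illustration only.

```python
def select_candidate(candidates, source_id):
    # First type: a node that is the source of the multicast service.
    sources = [c for c in candidates if c['is_source']]
    if sources:
        return sources[0]
    # Second type: a node already involved in the multicast service
    # (a subscriber or a forwarding node).
    involved = [c for c in candidates if c['is_involved']]
    if involved:
        return involved[0]
    # Third type: the node whose R2R ID value is closest to the multicast source ID.
    return min(candidates, key=lambda c: abs(c['node_id'] - source_id))
```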
  • The R2R protocol, as found in PCT/EP2017/053042, uses distinct unsigned integers to identify all network nodes (i.e. assigning to each of them an integer ID) and builds an ordered ring-based structured routing overlay based on the node ID values. Every node will have connections/paths to its logical neighbors, i.e. the nodes with adjacent ID values.
  • every node can also have connections/paths to some finger nodes whose ID values are remote and across the logical ring structure.
  • the R2R protocol creates a logical routing overlay for routing unicast traffic over the network.
  • the requesting node 210 receives in response to the mcJoin message S1 a mcNotifyNb message S2 from the intermediate node 224 indicating candidate node 232.
  • Candidate node 232 may be a candidate node of third type. Inasmuch as there are neither candidate nodes of first type nor candidate nodes of second type indicated by other neighbor nodes, the requesting node 210 may select node 232 as selected candidate node.
  • In response to receiving mcNotifyNb messages S2 and selecting a selected candidate node 232, the requesting node 210 sends a mcNotifyNbAck message S3 to the intermediate node 224 which has sent the mcNotifyNb message indicating the selected candidate node.
  • the mcNotifyNbAck message S3 instructs the intermediate node 224 to virtually connect the subscriber node and the selected candidate node as virtual neighbor nodes.
  • By virtually connecting to the selected candidate node, the requesting node 210 extends its local neighbor set with further nodes that support the requesting node 210 in requesting and receiving the multicast service.
  • a node supporting the requesting node may be a source of the multicast service, a node that is involved in the multicast service or a node that helps the requesting node 210 to get closer to a node that provides the multicast service. This has the advantage that global routing information is not required. Instead, only local neighbor information is used.
  • the requesting node 210 sends the mcNotifyNbAck S3 message to intermediate node 224 inasmuch as node 232 has been selected by the requesting node 210 as selected candidate node and node 232 was indicated in the mcNotifyNb message S2 from intermediate node 224.
  • the requesting node 210 receives a mcPathEst message S4 from the intermediate node 224 indicating that the requesting node 210 and the selected candidate node 232 are virtual neighbor nodes.
  • the mcPathEst message S4 is hop-by-hop forwarded via forwarding nodes that will provide the connection between intermediate node 224 and subscriber 210 so as to inform each forwarding node that it is part of the multicast distribution.
  • the mcPathEst message also lets the forwarding nodes update their channel tables, which store information about multicast services and associated subscribers.
  • intermediate node 224 sends a further mcPathEst message S5 to the selected candidate that is hop-by-hop forwarded via forwarding nodes that will provide the connection between the selected candidate node 232 and intermediate node 224 so as to inform each forwarding node that it is part of the multicast distribution.
  • Intermediate node 224 finally joins the path from the requesting node 210 and the path from the selected candidate node 232 so as to virtually connect subscriber 210 and the selected candidate node.
  • Intermediate node 224 is then also a forwarding node for that multicast service inasmuch as it is part of the connection path between the requesting node 210 and the selected candidate node 232.
  • the steps of sending mcJoin messages S1, receiving mcNotifyNb messages S2, selecting a selected candidate node, sending a mcNotifyNbAck message S3 indicating the selected candidate node and receiving a mcPathEst message S4 to make the requesting node 210 and the selected candidate node 232 virtual neighbor nodes are repeated iteratively until the requesting node 210 is connected to a node that provides the multicast service.
  • Such a node is either a source of the multicast service (candidate node of first type) or a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of second type).
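  • The iterative joining described above can be summarized by the following sketch; the node object and its helper methods (send_to_neighbors, collect, send, connected_to_provider) are assumptions of this sketch, the replies are taken to have the R2RmMessage-style fields sketched earlier, and select_candidate refers to the priority rule sketched above.

```python
def join_multicast(node, multicast_id, source_id):
    """Iteratively join a multicast service using only local neighbor information."""
    while not node.connected_to_provider(multicast_id):
        node.send_to_neighbors('mcJoin', multicast_id)              # step S1
        replies = node.collect('mcNotifyNb')                        # step S2
        candidates = [c for reply in replies for c in reply.candidates]
        selected = select_candidate(candidates, source_id)
        proposer = next(r.sender_id for r in replies if selected in r.candidates)
        node.send(proposer, 'mcNotifyNbAck', selected)              # step S3
        node.collect('mcPathEst')                                   # step S4
        node.local_neighbor_set.add(selected['node_id'])            # new virtual neighbor
```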
  • the requesting node 210 may receive mcNotifyNb messages S2 from different intermediate nodes 222, 224, 226 indicating the same candidate node 232.
  • Candidate node 232 may be a node to be selected by the requesting node 210.
  • the requesting node 210 may send the mcNotifyNbAck message S3 indicating the selected candidate node to only one of the intermediate nodes 222, 224, 226 that indicated the same candidate node 232. This has the advantage that conflicts can be prevented that are caused by the distributed nature of the multicast resource-to-resource protocol.
  • the above procedure is described from the perspective of an intermediate node 224 supporting a requesting node 210 in requesting and receiving a multicast service.
  • intermediate node 224 may receive a mcJoin message S1 from one of its neighbor nodes. As described above, each node in the network maintains a local neighbor set of physical or virtual neighbor nodes. In response to receiving the mcJoin message S1, the intermediate node 224 selects one or more candidate nodes 232, 234, 236 according to a rule.
  • the rule may define a priority. In one embodiment, the rule defines three types of candidate nodes. A node is considered as a candidate node of first type if the node is a source of the multicast service. A node is considered as a candidate node of second type, if the node is already involved in the multicast service.
  • Being involved in the multicast service means that the node is a subscriber of the multicast service or an intermediate node that forwards the multicast service (also referred to as a forwarding node).
  • the requesting node 210 may receive the multicast service from one of the first or the second types of candidate nodes when it is connected to one of these nodes.
  • a candidate node of the first type has a higher priority than candidate nodes of the other types.
  • If the intermediate node 224 finds in its local neighbor set a node that is a source of the multicast service (candidate node of first type) and a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of second type), the intermediate node 224 will select the node that is a source of the multicast service (candidate node of first type).
  • a candidate node of the first type has the same priority as a candidate node of the second type.
  • a candidate node of the first type may selectively have a lower priority than a candidate node of the second type but a higher priority than a candidate node of the third type. This may be the case if the candidate node of the second type has a lower load than the candidate node of the first type.
  • a candidate node of third type is a node that is neither the multicast source nor an intermediate node forwarding multicast traffic but a node that can help the requesting node 210 to get closer to a node that provides the multicast service.
  • a candidate node of third type may be the peer node that is observed to have the closest Resource-to-Resource (R2R) protocol ID value to the multicast source, as described above.
  • R2R Resource-to-Resource
  • intermediate node 224 may find out from the local neighbor set that nodes 232, 234 and 236 are neither candidate nodes of the first type nor candidate nodes of the second type. However, node 232 may have an R2R protocol ID that is close to the R2R ID of the source node of the multicast service and is therefore selected by the intermediate node 224 as candidate node.
  • Intermediate node 224 sends the requesting node 210 a mcNotifyNb message S2 indicating the one or more candidate nodes in response to the mcJoin message.
  • the requesting node 210 may receive mcNotifyNb messages S2 from all or a subset of its neighbor nodes and select a selected candidate node.
  • Requesting node 210 sends a mcNotifyNbAck message S3 to the intermediate node which sent the mcNotifyNb message indicating the selected candidate node.
  • the selected candidate node may be node 232.
  • intermediate node 224 receives a mcNotifyNbAck message S3 from requesting node 210 instructing it to virtually connect requesting node 210 and selected candidate node 232 as virtual neighbor nodes.
  • In response to receiving the mcNotifyNbAck message S3, the intermediate node 224 sends a first mcPathEst message S4 that is hop-by-hop forwarded via forwarding nodes that will provide the connection between intermediate node 224 and subscriber 210 so as to inform each forwarding node that it is part of the multicast distribution. Moreover, the mcPathEst message also lets the forwarding nodes update a channel table that stores the information of multicast channels and subscribers.
  • Intermediate node 224 sends a second mcPathEst message S5 to the selected candidate that is hop-by-hop forwarded via forwarding nodes that will provide the connection between the selected candidate node 232 and intermediate node 224 so as to inform each forwarding node that it is part of the multicast distribution.
  • the intermediate node 224 joins the two paths so as to virtually connect subscriber 210 and the selected candidate node.
  • Intermediate node 224 is then also a forwarding node for that multicast service inasmuch as it is part of the connection path between the requesting node 210 and the selected candidate node 232.
  • the requesting node 210 is connected to a new node 232 that supports the requesting node 210 in requesting and receiving the multicast service.
  • the requesting node 210 will iteratively repeat the steps of sending mcJoin messages S1, receiving mcNotifyNb messages S2, selecting a selected candidate node, sending a mcNotifyNbAck message S3 indicating the selected candidate node and receiving a mcPathEst message S4 to make the requesting node 210 and the selected candidate node 232 virtual neighbor nodes, until the requesting node 210 is connected to a node that provides the multicast service.
  • Such a node is either a source of the multicast service (candidate node of first type) or a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of second type).
  • intermediate node 224 may send first and second mcPathEst messages S4 and S5 simultaneously.
  • every node 210, 222, 224, 226, 232, 234 and 236 updates its own multicast state to its neighbors by sending mcUpCh messages.
  • Such an update message allows neighbor nodes to collect information about the multicast channels in which the node is involved. This information may comprise whether or not the node is a multicast source, a subscriber and/or an intermediate node forwarding multicast traffic of a multicast service.
  • the mcUpCh message is not flooded across the whole network but is only sent to all or a subset of nodes of the local neighbor set.
  • Since the local neighbor set is not a static set, any node can decide whether it has to increase or shrink the size of its neighbor set.
  • a mcUpCh message may contain the following information: i) Multicast source information and ii) Information of pass-by multicast services.
  • the mcUpCh message is also used to inform the neighbor nodes about a new node that has joined the network.
  • the neighbor nodes receiving the mcUpCh message will update their local neighbor set accordingly.
  • nodes periodically send mcUpCh messages to their neighbors. This has the advantage that network changes can be detected.
  • a node that receives a multicast service but wants to unsubscribe sends a mcLeave message. This message will be hop-by-hop forwarded along the distribution path of the multicast service. At every hop, the respective forwarding node will remove the routing state of the multicast overlay for the leaving subscriber. After that, traffic data of the unsubscribed multicast service will not be forwarded to the leaving node.
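  • As an illustrative sketch of this hop-by-hop clean-up, a forwarding node could process a mcLeave message roughly as follows; the channel_table layout and the forward callback are assumptions made for the sketch, not part of the described protocol.

```python
def handle_mc_leave(channel_table, multicast_id, subscriber_id, next_hop, forward):
    subscribers = channel_table.get(multicast_id, set())
    subscribers.discard(subscriber_id)            # remove routing state for the leaving subscriber
    if subscribers:
        channel_table[multicast_id] = subscribers
    else:
        channel_table.pop(multicast_id, None)     # no subscriber left behind this node
    if next_hop is not None:
        forward(next_hop, 'mcLeave', multicast_id, subscriber_id)  # continue along the path
```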
  • each multicast service is associated with a multicast service identifier cId to uniquely symbolize the multicast service.
  • the identifier cId will be used by any node that is interested in the multicast service to initiate its mcJoin request.
  • the cId will also be used when creating the multicast overlay and building the routing states on every node constituting the multicast overlay.
  • the srcId is an identifier of the R2R protocol.
  • the groupId may be generated in relation to the multicast content. For example, it can be generated by applying a hash function on the topic title of the multicast service. Note that there are many other ways to compose the multicast service ID. The only requirements are that such a multicast service ID can uniquely differentiate multicast services and can be easily identified by subscribers in the network.
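  • For illustration, a multicast service ID composed of the srcId and a groupId hashed from the topic title could be built as in the following sketch; the particular hash function and string formatting are assumptions, since the description only requires uniqueness and easy identification by subscribers.

```python
import hashlib

def make_multicast_id(src_id: int, topic_title: str) -> str:
    group_id = hashlib.sha256(topic_title.encode('utf-8')).hexdigest()[:8]
    return f"{src_id}:{group_id}"

# Any interested subscriber can recompute the same cId from the topic title,
# so no separate group-address discovery mechanism is needed.
cid = make_multicast_id(src_id=42, topic_title="football-finals-stream")
```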
  • Fig. 5 depicts an exemplary flow rule composition 500 according to an embodiment of the present invention.
  • Flow rule composition 500 may be present in nodes corresponding to the extended OVS architecture 100.
  • the flow rule composition 500 comprises a flow table 520 that is part of the kernel data path 110 and a channel table 510 that is part of the user space.
  • Channel table 510 comprises for each multicast channel ID of a multicast service a list of subscribers that have subscribed to this multicast service.
  • the channel table 510 is updated by adding an entry if the node acts as a forwarding node or is a subscriber node for the multicast service.
  • the table is updated by deleting an entry in response to receiving a mcLeave message.
  • the RCA 132 translates the channel table into corresponding entries in the flow table 520.
  • flow table 520 of the extended OVS architecture 100 comprises additional list-of-action entries which are specific for multicast operations. If the OVS kernel determines that packets match a multicast ID, e.g., cId_a or cId_b, the list of actions specific to the multicast cId is executed. The list of actions may comprise instructions for duplicating the multicast packets and sending them to multiple recipients.
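  • A minimal sketch of how the RCA could translate the channel table into per-cId flow entries with a list of actions (cf. Fig. 5) is shown below; the dictionary layout and the port_of mapping are illustrative assumptions, not the actual OVS data structures.

```python
def channel_table_to_flow_rules(channel_table, port_of):
    flow_table = []
    for cid, subscribers in channel_table.items():
        # one output action per subscriber, so matching packets are duplicated
        actions = [f"output:{port_of(s)}" for s in subscribers]
        flow_table.append({"match": {"multicast_id": cid}, "actions": actions})
    return flow_table
```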
  • Fig. 6 shows an alternative embodiment of flow table 520, which uses generic group actions.
  • Generic means in this context that the group actions are independent of a specific multicast service.
  • the group actions are stored in a separate group table 610.
  • Each entry of group table 610 comprises a group identifier and a list of actions, also referred to as an action bucket. There is at least one action for each subscriber of the multicast service.
  • the multicast service specific list-of-action entries are replaced by a reference to an entry in the group table 610.
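  • The alternative of Fig. 6 can be sketched in the same way, with each flow entry referring to a generic group whose action bucket holds one output action per subscriber; again, the identifiers and the dict layout are illustrative assumptions.

```python
def channel_table_to_group_rules(channel_table, port_of):
    group_table, flow_table = {}, []
    for group_no, (cid, subscribers) in enumerate(channel_table.items(), start=1):
        group_table[group_no] = [f"output:{port_of(s)}" for s in subscribers]  # action bucket
        flow_table.append({"match": {"multicast_id": cid}, "actions": [f"group:{group_no}"]})
    return group_table, flow_table
```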
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A system comprising a plurality of nodes supporting a multicast resource-to-resource protocol is proposed. The system comprises a first node, the first node being configured to subscribe to a multicast service. The system further comprises a plurality of second nodes according to the third aspect of the invention, each of the plurality of second nodes being configured to be iteratively virtually connected to the first node so as to provide a path to a third node that is either a source node of the multicast service or a node that is already involved in the multicast service, the third node and the plurality of second nodes being configured to provide the multicast service to the first node over the path.

Description

TITLE
Methods, nodes and system for a multicast service
TECHNICAL FIELD
The present invention generally relates to the field of communication network technology. Particularly, the present invention relates to methods, network nodes and system for a multicast control plane. In particular, the present invention relates to a node for requesting and receiving a multicast service in a network. The present invention further relates to a method for requesting and receiving a multicast service by a node in a network. The present invention further relates to a node for supporting a multicast service in a network of a plurality of nodes. Moreover, the present invention relates to a method for supporting a multicast service in a network of a plurality of nodes. The present invention further relates to a system that comprises a plurality of nodes supporting a multicast resource-to-resource protocol, and to a computer program.
BACKGROUND Multicast is a method of providing the same data to a plurality of network nodes in a communication network. Multicast is already supported in many existing communication systems. In fact, there are many IP multicast protocols proposed, standardized and deployed. In the following the notation (S, G) represents a multicast group where S refers to the IP address of the multicast source, and G refers to the particular multicast group IP address.
Protocol independent multicast (PIM), originally proposed by Cisco, is widely accepted as a standard way for IP multicast. There are three variants of this multicast protocol available.
The first variant is PIM dense mode (PIM-DM), which is described in more detail in: J.
Nicholas, A. Adams and W. Siadak, Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised), RFC Editor, 2005. PIM-DM assumes that almost all possible subnets have at least one receiver wanting to receive the multicast traffic from the source, so the network is flooded with traffic on all possible branches, then pruned back when branches do not express an interest in receiving the packets. PIM-DM allows a routing device to use any unicast routing protocol and performs reverse path forwarding (RPF) checks using the unicast routing table. PIM-DM has an implicit join message, so routing devices use the flood-and-prune method to deliver traffic everywhere and then determine where the uninterested receivers are. PIM-DM uses source-based distribution trees in the form (S, G).
The second variant is PIM sparse mode (PIM-SM), which is further described in: B. Fenner,
M. J. Handley, H. Holbrook, I. Kouvelas, R. Parekh, Z. Zhang and L. Zheng, Protocol
Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised), RFC Editor, 2016. PIM-SM assumes that very few of the possible receivers want packets from each source, so the network establishes and sends packets only on branches that have at least one leaf indicating (by message) an interest in the traffic. PIM-SM has an explicit join message, so routing devices determine where the interested receivers are and send join messages upstream to their neighbors, building trees from receivers to the rendezvous point (RP) where an RP routing device is used as the initial source of multicast group traffic and therefore builds distribution trees in a wildcard form (*, G) where the asterisk (*) indicates that the state applies to any multicast source sending to group G.
The third variant is PIM source-specific multicast (SSM). PIM-SSM enhances PIM-SM by allowing a client to receive multicast traffic directly from the source, without the help of an RP. PIM-SSM is used with IGMPv3 to create a shortest-path tree between receiver and source.
PIM protocols are descended from the Distance Vector Multicast Routing Protocol (DVMRP) as found in D. Waitzman, S. Deering and C. Partridge, "Distance vector multicast routing protocol," RFC1075, Nov. 1988, and from Multicast Open Shortest Path First (Multicast OSPF, MOSPF) as found in: J. Moy, "Multicast routing extensions to OSPF," RFC1584, 1994. DVMRP is a dense-mode-only protocol and uses the flood-and-prune or implicit join method to deliver traffic everywhere and then determine where the uninterested receivers are. DVMRP uses source-based distribution trees in the form (S, G), and builds its own multicast routing tables for reverse path forwarding (RPF) checks. Due to a number of limitations, DVMRP is unattractive for large-scale internet use. MOSPF adds an explicit join message in the original OSPF, so routing devices do not have to flood their entire domain with multicast traffic from every source. MOSPF uses source-based distribution trees in the form (S, G) as well.
Most multicast protocols use join/leave messages according to IGMPv3 as found in B.
Cain, S. Deering, I. Kouvelas, B. Fenner and A. Thyagarajan, "RFC3376," Internet Group Management Protocol, Version 3, 2002. Therefore, multicast protocols rely on IGMP, which has to be enabled in configuration.
Existing multicast protocols follow a layered design principle, where interconnecting is provided by a routing protocol underneath and a distribution tree for a multicast group is built on top of the former. Such solutions may fit non-critical communications, which do not ask for stringent reliability requirements. However, the multi-layer design may not fully meet the requirements for control plane (CP) network function (NF) communications as explained below.
The current multicast group addressing scheme (i.e. the multicast IP address identifying a multicast group) is inflexible. Each multicast group will be assigned its multicast IP address from the range of 224.0.0.0 to 239.255.255.255. On the one hand, any receiver has to know the IP address of the multicast group in advance so as to initialize a request to join in the group. This assumes an additional discovery mechanism in the system. On the other hand, allocations and collections of multicast group addresses have to be carefully managed.
Furthermore, the address space for IP multicast is exhaustible. This again explains why an address management mechanism is necessary.
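As a simple illustration of the fixed group address range mentioned above, the following sketch checks whether an IPv4 address belongs to the multicast range 224.0.0.0 to 239.255.255.255 (i.e. 224.0.0.0/4); it is an example only and not part of the proposed protocol.

```python
import ipaddress

def is_ip_multicast_group(address: str) -> bool:
    # The IPv4 multicast range 224.0.0.0-239.255.255.255 is exactly 224.0.0.0/4.
    return ipaddress.IPv4Address(address) in ipaddress.IPv4Network("224.0.0.0/4")

assert is_ip_multicast_group("239.255.255.255")
assert not is_ip_multicast_group("192.0.2.1")
```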
Moreover, though building a multicast distribution overlay on top of the networking layer makes the multicast protocol independent of the choice of a routing protocol, this limits the path diversity when creating the multicast distribution overlay. When a routing protocol is running, it periodically collects the network topology information and figures out the next-hop information to reach every other node in the network. For example, OSPF (Open Shortest Path First) collects information of the network topology and calculates 1-to-N shortest paths. Every routing table entry tells the next-hop information of the shortest path toward a specific network segment. Therefore, for a multicast protocol at the upper layer, there is only one way that a receiver can send its request for joining in a multicast group. Although the route given by the routing protocol is the best path to the multicast source, it is not necessarily the best path to reach the multicast distribution overlay. For instance, there could be another node nearby already being a forwarding node in the multicast overlay whose distance to the receiver is much shorter than the distance to the multicast source, but the receiver does not know that. Therefore, the multi-layer design blocks the path diversity when creating the multicast distribution overlay since other possible ports are simply ruled out when calculating the unicast routing overlay. This obviously delays the multicast overlay construction.
The multi-layer protocol design also causes longer recovery time because the multicast distribution overlay can be recovered only if the underlying routing overlay recovers. This will be amplified by using link-state routing protocols that need to propagate network changes across the whole network domain. Waiting for the re-convergence of the routing protocol further delays the recovery of the multicast overlay that strictly relies on the routing information. The root reason is also the lack of path diversity for the routing layer itself. Clearly, once the path is down or invalid, a node has to figure out another path using global network information, which takes time to converge.
Considering the disadvantages and problems discussed above, a desirable multicast solution should preferably have the following features. First of all, the multicast addressing should be flexible and informational. A multicast protocol should let any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group. Furthermore, the multicast protocol should have abundant path options when creating the multicast distribution overlay. In other words, any host/receiver should be able to quickly join in a multicast group. The multicast distribution overlay should also be resilient to network churn. For the latter two goals, this calls for a new routing protocol design considering both unicast and multicast.
SUMMARY
Having recognized the above-mentioned disadvantages and problems, the present invention aims to improve the state of the art. In particular, an object of the present invention is to provide a method and a node for requesting a multicast service, a method and a node for supporting a multicast service and a system for requesting and supporting a multicast service.
The above-mentioned object is achieved by the features of the independent claims. Further embodiments of the invention are apparent from the dependent claims, the description and the figures.
According to a first aspect, the invention relates to a node for requesting and receiving a multicast service in a network comprising a plurality of nodes in accordance with a multicast resource-to-resource protocol. The node has a local neighbor set comprising one or more neighbor nodes of the node. The node is configured to:
- send a first message for requesting the multicast service to all or a subset of the nodes in the local neighbor set;
- receive, in response to the first message, a second message from at least one node of the local neighbor set, each second message indicating one or more candidate nodes that are adapted to support the node in requesting and receiving the multicast service;
- select, according to a rule, a candidate node among the candidate nodes indicated in the received second message(s) to become a virtual neighbor node;
- send, to the node which sent the second message indicating the selected candidate node, a third message for virtually connecting the node and the selected candidate node as virtual neighbor nodes.
Thereby, a node requesting a multicast service informs a subset of its neighbor nodes about the request and receives a number of node IDs that either provide or lead to a node that provides the requested service. This has the advantage that only local information is used and abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join a multicast group. The multicast distribution overlay is also resilient to network churn.

According to an implementation of the first aspect, the node is further configured to receive a fourth message in response to the third message and to add the selected candidate node to the local neighbor set.
Thereby, the local neighbor set is updated with a new virtual node. This has the advantage that the candidate node will be considered in further iteration steps.
According to an implementation of the first aspect, the node is further configured to iteratively repeat the above steps.
Thereby, the requesting node iteratively connects to a node that provides the requested multicast service. This has the advantage that no global routing information is required.
According to an implementation of the first aspect, the rule to select the candidate node comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
Thereby, a rule for selecting a best candidate node is defined. This has the advantage that the method is self-stabilizing.
According to an implementation of the first aspect, the rule for selecting the candidate node further comprises taking into account that the same candidate information is received from different neighbor nodes.
Thereby, conflicts can be detected. This has the advantage that the detected conflicts can be resolved during the iterative process.
According to an implementation of the first aspect, the node is further configured to receive sixth messages from one or more neighbor nodes and to update the local neighbor set based on the sixth messages.
Thereby, the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the first aspect, the node is further configured to send sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
Thereby, the local neighbor sets of the neighbors are updated and network changes are detected periodically. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the first aspect, the multicast service is identified by a multicast ID, the multicast ID comprising a source ID and a group ID.
Thereby, a particular multicast service can be identified. This has the advantage that the multicast addressing is flexible and informational and lets any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group.

According to an implementation of the first aspect, the node is further configured to receive fourth or fifth messages containing subscriber information and a particular multicast ID and to update an entry in a local channel table based on the subscriber information and the particular multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is added to a local channel table. This has the advantage that no global routing information is required.
According to an implementation of the first aspect, the node is further configured to receive a seventh message comprising a subscriber ID and the multicast ID, and to delete an entry in the local channel table based on the subscriber ID and the multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is deleted. This has the advantage that no global routing information is required.
According to a second aspect of the invention, a method is provided for requesting and receiving a multicast service by a node in a network comprising a plurality of nodes in accordance with a multicast resource-to-resource protocol. The method comprises the following steps:
- maintaining a local neighbor set comprising one or more neighbor nodes of the node;
- sending a first message for requesting the multicast service to all or a subset of the nodes in the local neighbor set;
- receiving, in response to the first message, a second message from at least one node of the local neighbor set, each second message indicating one or more candidate nodes that are adapted to support the node (210) in requesting and receiving the multicast service;
- selecting, according to a rule, a candidate node among the candidate nodes to become a virtual neighbor node;
- sending, to the node which sent the second message indicating the selected candidate node, a third message (S3) for virtually connecting the node and the selected candidate node as virtual neighbor nodes.
Thereby, a node requesting a multicast service informs a subset of its neighbor nodes about the request and receives a number of node IDs that either provide or lead to a node that provides the requested service. This has the advantage that only local information is used and abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join a multicast group. The multicast distribution overlay is also resilient to network churn.

According to an implementation of the second aspect, the method comprises receiving a fourth message in response to the third message and adding the selected candidate node to the local neighbor set.
Thereby, the local neighbor set is updated with a new virtual node. This has the advantage that the candidate node will be considered in further iteration steps. According to an implementation of the second aspect, the method comprises iteratively repeating the previous steps.
Thereby, the requesting node iteratively connects to a node that provides the requested multicast service. This has the advantage that no global routing information is required.
According to an implementation of the second aspect, the rule for selecting the candidate node comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
Thereby, a rule for selecting a best candidate node is defined. This has the advantage that the routing path finding is self-stabilizing.

According to an implementation of the second aspect, the rule for selecting the candidate node further comprises taking into account that the same candidate information is received from different neighbor nodes.
Thereby, conflicts can be detected. This has the advantage that the detected conflicts can be resolved during the iterative process.

According to an implementation of the second aspect, the method further comprises receiving sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages. Thereby, the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the second aspect, the method further comprises sending sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
Thereby, the local neighbor sets of the neighbors are updated and network changes are detected periodically. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the second aspect, the multicast service is identified by a multicast ID, the multicast ID comprising a source ID and a group ID.
Thereby, a particular multicast service can be identified. This has the advantage that the multicast addressing is flexible and informational and lets any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group.
According to an implementation of the second aspect, the method further comprises receiving fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is added to a local channel table. This has the advantage that no global routing information is required.
According to an implementation of the second aspect, the method further comprises receiving a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is deleted. This has the advantage that no global routing information is required.

According to a third aspect, the invention relates to a node for supporting a multicast service in a network of a plurality of nodes in accordance with a multicast resource-to-resource protocol. The node has a local neighbor set comprising one or more neighbor nodes of the node. The node is configured to:
- receive a first message from a neighbor node in the local neighbor set requesting a multicast service,
- select, according to a rule, one or more candidate nodes among all or a subset of the local neighbor set that are adapted to support the neighbor node in requesting and receiving the multicast service,
- send, in response to the first message, a second message to the neighbor node indicating the selected one or more candidate nodes;
- receive a third message from the neighbor node indicating a candidate node to connect to;
- send, in response to the third message, a fourth message to the neighbor node and a fifth message to the candidate node to connect to for virtually connecting the neighbor node and the candidate node to connect to as virtual neighbor nodes.
Thereby, a request for a multicast service is handled by neighbor nodes iteratively. This has the advantage that no global routing information is required for connecting the requesting node to a node that can provide the multicast service. Moreover, path establishment is a dynamic process, which can be repeated at any time to take network changes into account. As a further advantage, abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join a multicast group.
According to an implementation of the third aspect, the rule for selecting the one or more candidate nodes comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
Thereby, a rule for selecting a best candidate node is defined. This has the advantage that the routing path finding is self-stabilizing.

According to an implementation of the third aspect, the node is further configured to receive sixth messages from one or more neighbor nodes and to update the local neighbor set based on the sixth messages.
Thereby, the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the third aspect, the node is further configured to send sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set, or when disconnecting from a node being in the local neighbor set.
Thereby, the local neighbor sets of the neighbors are updated and network changes are detected periodically. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the third aspect, the multicast service is identified by a multicast ID, the multicast ID comprises a source ID and a group ID.
Thereby, a particular multicast service can be identified. This has the advantage that the multicast addressing is flexible and informational and lets any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group.

According to an implementation of the third aspect, the node is further configured to receive fourth or fifth messages containing subscriber information and a particular multicast ID and to update an entry in a local channel table based on the subscriber information and the particular multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is added to a local channel table. This has the advantage that no global routing information is required.
According to an implementation of the third aspect, the node is further configured to receive a seventh message comprising a subscriber ID and the multicast ID, and to delete an entry in the local channel table based on the subscriber ID and the multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is deleted. This has the advantage that no global routing information is required.

According to a fourth aspect of the invention, a method is provided for supporting a multicast service in a network of a plurality of nodes in accordance with a multicast resource-to-resource protocol. The method comprises the following steps:
- maintaining a local neighbor set of one or more neighbor nodes of the node;
- receiving a first message from a neighbor node in the local neighbor set requesting a multicast service;
- selecting, according to a rule, one or more candidate nodes among all or a subset of the local neighbor set that are adapted to support the neighbor node in requesting and receiving the multicast service;
- sending, in response to the first message, a second message to the neighbor node indicating the selected one or more candidate nodes;
- receiving a third message from the neighbor node indicating a candidate node to connect to;
- sending, in response to the third message, a fourth message to the neighbor node and a fifth message to the candidate node to connect to for virtually connecting the neighbor node and the candidate node to connect to as virtual neighbor nodes.
Thereby, a request for a multicast service is handled by neighbor nodes iteratively. This has the advantage that no global routing information is required for connecting the requesting node to a node that can provide the multicast service. Moreover, path establishment is a dynamic process, which can be repeated at any time to take network changes into account. As a further advantage, abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join a multicast group.
According to an implementation of the fourth aspect, the rule for selecting the one or more candidate nodes comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
Thereby, a rule for selecting a best candidate node is defined. This has the advantage that the routing path finding is self-stabilizing. According to an implementation of the fourth aspect, the method further comprises receiving sixth messages from one or more neighbor nodes and updating the local neighbor set based on the sixth messages.
Thereby, the local neighbor set is updated. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the fourth aspect, the method further comprises sending sixth messages to one or more physical or virtual neighbor nodes periodically or when connecting to a node not being in the local neighbor set or when disconnecting to a node being in the local neighbor set.
Thereby, local neighbor sets of the neighbor are updated and network changes are detected periodically. This has the advantage that information required for decentralized routing is collected and maintained.
According to an implementation of the fourth aspect, the multicast service is identified by a multicast ID, the multicast ID comprises a source ID and a group ID.
Thereby, a particular multicast service can be identified. This has the advantage that the multicast addressing is flexible and informational and lets any interested host know the multicast group address easily, which is tightly related to the interests of the multicast group.
According to an implementation of the fourth aspect, the method comprises receiving fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
Thereby, the forwarding information in each node that forwards the multicast service is added to a local channel table. This has the advantage that no global routing information is required.
According to an implementation of the fourth aspect, the method further comprises receiving a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID. Thereby, the forwarding information in each node that forwards the multicast service is deleted. This has the advantage that no global routing information is required.
According to a fifth aspect, the invention provides a system comprising a plurality of nodes supporting a multicast resource-to-resource protocol. The system comprises a first node according to the first aspect of the invention, the first node being configured to subscribe to a multicast service. The system further comprises a plurality of second nodes according to the third aspect of the invention, each of the plurality of second nodes being configured to be iteratively virtually connected to the first node so as to provide a path to a third node that is either a source node of the multicast service or a node that is already involved in the multicast service, the third node and the plurality of second nodes being configured to provide the multicast service to the first node over the path.
Thereby, a complete system is defined that includes the functionality of both the requesting node and the intermediate node. This has the advantage that a multicast service is routed to a requesting node without requiring global routing information. It further leads to a self-stabilizing connection path. As a further advantage, abundant path options are available when creating the multicast distribution overlay. Therefore, any host/receiver is able to quickly join a multicast group. The multicast distribution overlay is also resilient to network churn.
According to a sixth aspect, the invention relates to a computer program having a program code for performing the method according to the second or the fourth aspect, when the computer program runs on a computing device.
Thereby, the method can be performed in an automatic and repeatable manner.
Advantageously, the computer program can be performed by the node according to the first aspect or by the node according to the third aspect, respectively.
More specifically, it should be noted that the above apparatuses may be implemented based on discrete hardware circuitry with discrete hardware components, integrated chips or arrangements of chip modules, or based on a signal processing device or chip controlled by a software routine or program stored in a memory, written on a computer-readable medium or downloaded from a network such as the internet.

It shall further be understood that a preferred embodiment of the invention can also be any combination of the dependent claims or above embodiments with the respective independent claim.
These and other aspects of the invention will be apparent and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
The above aspects and implementation forms of the present invention will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which:

Fig. 1 shows an extended OVS node architecture block diagram according to an embodiment of the invention.
Fig. 2 shows an exemplary network and message exchange according to an embodiment of the invention.
Fig. 3 shows further message exchanges in the network according to the embodiment of Fig. 2.
Fig. 4 shows a local neighbor set of a node after successful message exchanges according to an embodiment of the invention.
Fig. 5 shows a flow rule composition of an extended OVS node according to an embodiment of the invention.

Fig. 6 shows an alternative flow rule composition of an extended OVS node according to an embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Fig. 1 shows an extended Open vSwitch (OVS) architecture 100 for supporting control plane multicast according to an embodiment of the present invention. OVS is a multilayer virtual switch designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols. The extended OVS node architecture 100 consists of two main parts: a kernel part and a userspace part, which are graphically separated by the dotted line. The kernel part comprises a kernel datapath module 110, which implements a packet forwarding engine responsible for per-packet lookup, modification and forwarding. Compared to existing OVS nodes, the OVS kernel according to the embodiment of the present invention is further extended to support control plane (CP) network function (NF) multicast communications. The userspace part comprises an ovsdb-server 120, which is responsible for storing information about the configuration of the switch. The userspace part further comprises an ovs-vswitchd module 130 as a local daemon, which can modify the kernel part by modifying, for example, the flow rule entries in a flow table. It should be noted that, in a generalization beyond the specific software-defined networking (SDN) scenario, the flow table could also be a normal routing table comprising a set of data routing rules to be applied to incoming data packets and a set of control routing rules to be applied to incoming control packets. In addition, the local daemon also exposes an external interface in order to allow the extended OVS node to communicate with a remote SDN controller through the depicted control port.
As shown in Fig. 1, the extended OVS node architecture 100 further comprises a resource control agent (RCA) module 132, which is implemented locally inside the ovs-vswitchd module 130. The RCA module 132 extends the functionalities of the local daemon (i.e., the ovs-vswitchd module 130) and allows the extended OVS node to support a Resource-to-Resource multicast (R2Rm) protocol as defined by the present invention. The local daemon has been chosen because it is already used successfully as a local agent responsible for many basic control tasks, such as implementing the forwarding logic including media access control (MAC) learning, load balancing over bonded interfaces and communicating with the external SDN controller using the OpenFlow protocol. Thus, with minimum effort, the extended local daemon can support the multicast routing of the present invention.
In order to enable multicast routing, the proposed R2Rm protocol utilizes different messages, comprising:
- mcJoin: A message by which a node shows its interest in a multicast channel;
- mcNotifyNb: A message that a node sends to notify another node of a candidate that can help it to join a multicast service;
- mcNotifyNbAck: A message informing a node to establish a virtual connection between a node and the candidate;
- mcPathEst: A message that a node sends to construct the path of a multicast overlay;
- mcUpCh: A message by which a node announces its own multicast service information to another node;
- mcLeave: A message by which a node quits a multicast channel.
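By way of a non-limiting illustration, the following Python-style sketch shows one possible way of representing the above R2Rm messages as plain data records. The field names (sender_id, multicast_id, candidates, and so on) are merely illustrative assumptions and are not prescribed by the protocol.

from dataclasses import dataclass, field
from typing import List, Tuple

MulticastId = Tuple[int, str]          # (srcId, groupId), cf. the addressing scheme below

@dataclass
class McJoin:                          # S1: a node announces its interest in a multicast channel
    sender_id: int
    multicast_id: MulticastId

@dataclass
class McNotifyNb:                      # S2: reply listing candidate helper nodes
    sender_id: int
    multicast_id: MulticastId
    candidates: List[int] = field(default_factory=list)

@dataclass
class McNotifyNbAck:                   # S3: the subscriber picks one candidate to connect to
    sender_id: int
    multicast_id: MulticastId
    selected_candidate: int

@dataclass
class McPathEst:                       # S4/S5: hop-by-hop path construction messages
    subscriber_id: int
    multicast_id: MulticastId

@dataclass
class McUpCh:                          # periodic multicast state update to the neighbor set
    sender_id: int
    source_of: List[MulticastId] = field(default_factory=list)
    forwarding: List[MulticastId] = field(default_factory=list)

@dataclass
class McLeave:                         # a subscriber quits a multicast channel
    subscriber_id: int
    multicast_id: MulticastId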
Fig. 2 shows an exemplary network comprising nodes 210, 222, 224, 226, 232, 234 and 236 according to an embodiment of the present invention. The nodes may be extended OVS nodes within an SDN scenario as described above. It should be noted that the network of Fig. 2 is for illustration purposes only; the network may contain more or fewer nodes and may have different connections between the nodes.
Each node 210, 222, 224, 226, 232, 234 and 236 of the network has a connection to one or more neighbor nodes. As depicted in the exemplary network of Fig. 2, node 210 is connected to nodes 222, 224 and 226. The connections may be physical connections or virtual connections. In case of a physical connection, the nodes are directly connected by e.g. a cable. In case of virtual connections, the connection is established via one or more intermediate nodes.
Each node maintains a local neighbor set, which comprises the physical or virtual neighbors of the node. The local neighbor set may be a data structure stored in the resource control agent (RCA) module 132 of Fig. 1. In the exemplary network of Fig. 2, the local neighbor set 211 of node 210 comprises nodes 222, 224 and 226. Moreover, the local neighbor set 225 of node 224 comprises nodes 210, 232, 234 and 236.
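A minimal sketch of the per-node state assumed in the following examples is given below: a local neighbor set split into physical and virtual neighbors, the set of channels the node itself sources, and the channel table discussed further below in the context of Fig. 5. The attribute names are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

MulticastId = Tuple[int, str]

@dataclass
class RcaState:
    node_id: int
    physical_neighbors: Set[int] = field(default_factory=set)
    virtual_neighbors: Set[int] = field(default_factory=set)
    sourced_channels: Set[MulticastId] = field(default_factory=set)
    # channel table: multicast ID -> subscribers reached via this node (cf. Fig. 5)
    channel_table: Dict[MulticastId, List[int]] = field(default_factory=dict)

    def local_neighbor_set(self) -> Set[int]:
        # the local neighbor set contains both physical and virtual neighbors
        return self.physical_neighbors | self.virtual_neighbors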
In one aspect of the invention, node 210 may be a node requesting a multicast service. Such a node is also referred to as a subscriber or a requesting node. It has to be noted that each node in the network can be a requesting node 210 for a multicast service. Joining a multicast service requires the requesting node 210 to explicitly initialize a request. This operation is done by the requesting node 210 sending a mcJoin message S1 to its neighbor nodes. As described above, the neighbor nodes are organized in the local neighbor set that is maintained by each node. The requesting node can selectively send the mcJoin message S1 to all neighbor nodes or only to a subset of the neighbor nodes of the local neighbor set. In one embodiment, a requesting node only sends mcJoin messages to its physical neighbors, i.e. the neighbors to which it has physical connections. In an alternative embodiment, the subscriber can broadcast its interest in a multicast channel to both physical and virtual neighbors. It has to be noted that sending a mcJoin message does not require any convergence of the routing overlay but only connections to some neighbors, even if the connections are direct links.
In the embodiment of Fig. 2, the requesting node 210 sends the mcJoin message S1 to node 224. Node 224 is referred to as an intermediate node, because it is considered to support the requesting node in requesting and receiving the multicast service.
The requesting node 210 receives, in response to the mcJoin message S1, a mcNotifyNb message S2 from its neighbor node(s). Each mcNotifyNb message S2 indicates one or more candidate nodes that have been selected by the neighbor node(s). The requesting node 210 selects, among the indicated candidate nodes, a selected candidate node 232 that is adapted to support the subscriber in requesting and receiving the multicast service. The selection may be based on a rule. The rule may define a priority. In one embodiment, the rule defines three types of candidate nodes. A node is considered a candidate node of the first type if the node is a source of the multicast service. A node is considered a candidate node of the second type if the node is already involved in the multicast service. Being involved in the multicast service means that the node is a subscriber of the multicast service or is an intermediate node that forwards the multicast service (also referred to as a forwarding node). The requesting node may receive the multicast service from one of the first or second types of candidate nodes. In one embodiment, a candidate node of the first type has a higher priority than candidate nodes of the other types. Thus, if the requesting node 210 receives mcNotifyNb messages S2 from its neighbor node(s) indicating a node that is a source of the multicast service (candidate node of the first type) and a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of the second type), the requesting node 210 will select the node that is a source of the multicast service (candidate node of the first type). In another embodiment, a candidate node of the first type has the same priority as a candidate node of the second type. In yet another embodiment, a candidate node of the first type may selectively have a lower priority than a candidate node of the second type but a higher priority than a candidate node of the third type. This may be the case if the candidate node of the second type is determined to have, e.g., a lower load than the candidate node of the first type.
A candidate node of the third type is a node that is neither the multicast source nor an intermediate node forwarding multicast traffic, but a node that can help the requesting node to get closer to a node that provides the multicast service. In one embodiment, such a candidate node of the third type may be the peer node that is observed to have the closest Resource-to-Resource (R2R) protocol ID value to the multicast source. The R2R protocol, as found in PCT/EP2017/053042, uses distinct unsigned integers to identify all network nodes (i.e. assigning to each of them an integer ID) and builds an ordered ring-based structured routing overlay based on the node ID values. Every node will have connections/paths to its logical neighbors, i.e. those nodes whose ID values are adjacent in the logical ID space. Optionally, every node can also have connections/paths to some finger nodes whose ID values are remote and across the logical ring structure. Thereby, the R2R protocol creates a logical routing overlay for routing unicast traffic over the network.
Using the above-described rule has the advantage that finding a routing path is a self-stabilizing operation.
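The ranking described above may, for instance, be sketched as follows; the predicates is_source and is_involved are assumed to be answerable from the information collected via mcUpCh messages, and the third-type fallback simply picks the candidate whose ID is numerically closest to the source ID (on the R2R ring, a modular distance could be used instead).

from typing import Iterable, Optional

def select_candidate(candidates: Iterable[int], src_id: int,
                     is_source, is_involved) -> Optional[int]:
    # is_source(n) / is_involved(n) are predicates derived from mcUpCh information
    candidates = list(candidates)
    if not candidates:
        return None
    sources = [n for n in candidates if is_source(n)]
    if sources:
        return sources[0]              # first type: the multicast source itself
    involved = [n for n in candidates if is_involved(n)]
    if involved:
        return involved[0]             # second type: a subscriber or forwarding node
    # third type: the node whose ID is closest to the source ID
    return min(candidates, key=lambda n: abs(n - src_id))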
In the embodiment of Fig. 2, the requesting node 210 receives, in response to the mcJoin message S1, a mcNotifyNb message S2 from the intermediate node 224 indicating candidate node 232. Candidate node 232 may be a candidate node of the third type. Inasmuch as neither candidate nodes of the first type nor candidate nodes of the second type are indicated by other neighbor nodes, the requesting node 210 may select node 232 as the selected candidate node.
In response to receiving the mcNotifyNb messages S2 and selecting a selected candidate node 232, the requesting node 210 sends a mcNotifyNbAck message S3 to the intermediate node 224 which has sent the mcNotifyNb message indicating the selected candidate node. The mcNotifyNbAck message S3 instructs the intermediate node 224 to virtually connect the subscriber node and the selected candidate node as virtual neighbor nodes.
By virtually connecting to the selected candidate node, the requesting node 210 extends its local neighbor set by further nodes that support the requesting node 210 in requesting and receiving the multicast service. As described above, a node supporting the requesting node may be a source of the multicast service, a node that is involved in the multicast service or a node that helps the requesting node 210 to get closer to a node that provides the multicast service. This has the advantage that global routing information is not required. Instead, only local neighbor information is used.
In the embodiment of Fig. 2, which is continued in Fig. 3, the requesting node 210 sends the mcNotifyNbAck message S3 to intermediate node 224, inasmuch as node 232 has been selected by the requesting node 210 as the selected candidate node and node 232 was indicated in the mcNotifyNb message S2 from intermediate node 224.
In one embodiment, the requesting node 210 receives a mcPathEst message S4 from the intermediate node 224 indicating that the requesting node 210 and the selected candidate node 232 are virtual neighbor nodes. The mcPathEst message S4 is forwarded hop-by-hop via the forwarding nodes that will provide the connection between intermediate node 224 and subscriber 210, so as to inform each forwarding node that it is part of the multicast distribution. Moreover, the mcPathEst message also lets the forwarding nodes update their channel tables that store information about multicast services and associated subscribers. It is noted that intermediate node 224 sends a further mcPathEst message S5 toward the selected candidate node, which is forwarded hop-by-hop via the forwarding nodes that will provide the connection between the selected candidate node 232 and intermediate node 224, so as to inform each forwarding node that it is part of the multicast distribution. Intermediate node 224 finally joins the path from the requesting node 210 and the path from the selected candidate node 232 so as to virtually connect subscriber 210 and the selected candidate node. Intermediate node 224 is then also a forwarding node for that multicast service, inasmuch as it is part of the connection path between the requesting node 210 and the selected candidate node 232.
In one embodiment, the steps of sending mcJoin messages S1, receiving mcNotifyNb messages S2, selecting a selected candidate node, sending a mcNotifyNbAck message S3 indicating the selected candidate node and receiving a mcPathEst message S4 that makes the requesting node 210 and the selected candidate node 232 virtual neighbor nodes are repeated iteratively until the requesting node 210 is connected to a node that provides the multicast service. Such a node is either a source of the multicast service (candidate node of the first type) or a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of the second type).
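A possible, simplified rendering of this iterative join procedure is sketched below; it reuses the message records, the state sketch and select_candidate introduced above, while send and recv_notifications stand for whatever transport the node uses and are assumptions.

def join_multicast(state, multicast_id, send, recv_notifications,
                   is_source, is_involved, max_rounds=16):
    src_id = multicast_id[0]
    for _ in range(max_rounds):
        # S1: ask all (or a subset of) the nodes in the local neighbor set
        for nb in state.local_neighbor_set():
            send(nb, McJoin(state.node_id, multicast_id))

        # S2: collect mcNotifyNb replies; remember which neighbor proposed which candidate
        proposals = {}                                  # candidate ID -> proposing neighbor
        for reply in recv_notifications():
            for cand in reply.candidates:
                proposals.setdefault(cand, reply.sender_id)

        chosen = select_candidate(proposals, src_id, is_source, is_involved)
        if chosen is None:
            continue

        # S3: ask exactly one proposing neighbor to connect us to the chosen candidate
        send(proposals[chosen], McNotifyNbAck(state.node_id, multicast_id, chosen))
        # after the corresponding mcPathEst (S4), the candidate becomes a virtual neighbor
        state.virtual_neighbors.add(chosen)

        if is_source(chosen) or is_involved(chosen):
            return True                                 # reached a node providing the service
    return False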
In one embodiment, the requesting node 210 may receive mcNotifyNb messages S2 from different intermediate nodes 222, 224, 226 indicating the same candidate node 232.
Candidate node 232 may be a node to be selected by the requesting node 210. The requesting node 210 may send the mcNotifyNbAck message S3 indicating the selected candidate node to only one of the intermediate nodes 222, 224, 226 that indicated the same candidate node 232. This has the advantage that conflicts caused by the distributed nature of the multicast resource-to-resource protocol can be prevented.
In another aspect of the invention, the above procedure is described from the perspective of an intermediate node 224 supporting a requesting node 210 in requesting and receiving a multicast service.
Going back to Fig. 2, intermediate node 224 may receive a mcJoin message S1 from one of its neighbor nodes. As described above, each node in the network maintains a local neighbor set of physical or virtual neighbor nodes. In response to receiving the mcJoin message S1, the intermediate node 224 selects one or more candidate nodes 232, 234, 236 according to a rule. The rule may define a priority. In one embodiment, the rule defines three types of candidate nodes. A node is considered a candidate node of the first type if the node is a source of the multicast service. A node is considered a candidate node of the second type if the node is already involved in the multicast service. Being involved in the multicast service means that the node is a subscriber of the multicast service or an intermediate node that forwards the multicast service (also referred to as a forwarding node). The requesting node 210 may receive the multicast service from one of the first or second types of candidate nodes when it is connected to one of these nodes. In one embodiment, a candidate node of the first type has a higher priority than candidate nodes of the other types.
Thus, when the intermediate node 224 finds in its local neighbor set a node that is a source of the multicast service (candidate node of the first type) and a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of the second type), the intermediate node will select the node that is a source of the multicast service (candidate node of the first type). In another embodiment, a candidate node of the first type has the same priority as a candidate node of the second type. In yet another embodiment, a candidate node of the first type may selectively have a lower priority than a candidate node of the second type but a higher priority than a candidate node of the third type. This may be the case if the candidate node of the second type has a lower load than the candidate node of the first type.
A candidate node of the third type is a node that is neither the multicast source nor an intermediate node forwarding multicast traffic, but a node that can help the requesting node 210 to get closer to a node that provides the multicast service. In one embodiment, such a candidate node of the third type may be the peer node that is observed to have the closest Resource-to-Resource (R2R) protocol ID value to the multicast source, as described above.
Using the above-described rule has the advantage that finding a routing path is a self-stabilizing operation.
In the embodiment of Fig. 2, intermediate node 224 may find out from its local neighbor set that nodes 232, 234 and 236 are neither candidate nodes of the first type nor candidate nodes of the second type. However, node 232 may have an R2R protocol ID that is close to the R2R ID of the source node of the multicast service and is therefore selected by the intermediate node 224 as a candidate node.
In response to the mcJoin message, intermediate node 224 sends the requesting node 210 a mcNotifyNb message S2 indicating the one or more candidate nodes. As described above, the requesting node 210 may receive mcNotifyNb messages S2 from all or a subset of its neighbor nodes and selects a selected candidate node. The requesting node 210 sends a mcNotifyNbAck message S3 to the intermediate node which sent the mcNotifyNb message indicating the selected candidate node. In the embodiment of Fig. 2, the selected candidate node may be node 232. Thus, intermediate node 224 receives a mcNotifyNbAck message S3 from requesting node 210 indicating to virtually connect requesting node 210 and selected candidate node 232 as virtual neighbor nodes.
In response to receiving the mcNotifyNbAck message S3, the intermediate node 224 sends a first mcPathEst message S4, which is forwarded hop-by-hop via the forwarding nodes that will provide the connection between intermediate node 224 and subscriber 210, so as to inform each forwarding node that it is part of the multicast distribution. Moreover, the mcPathEst message also lets the forwarding nodes update a channel table that stores the information of multicast channels and subscribers. Intermediate node 224 sends a second mcPathEst message S5 toward the selected candidate node, which is forwarded hop-by-hop via the forwarding nodes that will provide the connection between the selected candidate node 232 and intermediate node 224, so as to inform each forwarding node that it is part of the multicast distribution. The intermediate node 224 joins the two paths so as to virtually connect subscriber 210 and the selected candidate node.
Intermediate node 224 is then also a forwarding node for that multicast service, inasmuch as it is part of the connection path between the requesting node 210 and the selected candidate node 232.
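One conceivable way for the intermediate node to process the mcNotifyNbAck message is sketched below; it reuses the records and state sketch introduced above, and next_hop_toward is an assumed helper backed by the node's local routing state.

def handle_notify_nb_ack(state, ack, send, next_hop_toward):
    # record the requesting node as a subscriber behind this node for the channel
    entry = state.channel_table.setdefault(ack.multicast_id, [])
    if ack.sender_id not in entry:
        entry.append(ack.sender_id)

    # S4: hop-by-hop toward the requesting node
    send(next_hop_toward(ack.sender_id),
         McPathEst(ack.sender_id, ack.multicast_id))
    # S5: hop-by-hop toward the selected candidate
    send(next_hop_toward(ack.selected_candidate),
         McPathEst(ack.sender_id, ack.multicast_id))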
As a result, the requesting node 210 is connected to a new node 232 that supports the requesting node 210 in requesting and receiving the multicast service.
In one embodiment, the requesting node 210 will iteratively repeat the steps of sending mcJoin messages S1, receiving mcNotifyNb messages S2, selecting a selected candidate node, sending a mcNotifyNbAck message S3 indicating the selected candidate node and receiving a mcPathEst message S4 that makes the requesting node 210 and the selected candidate node 232 virtual neighbor nodes, until the requesting node 210 is connected to a node that provides the multicast service. Such a node is either a source of the multicast service (candidate node of the first type) or a node that is a subscriber of the multicast service or an intermediate node that forwards the multicast service (candidate node of the second type).
In one embodiment, intermediate node 224 may send first and second mcPathEst messages S4 and S5 simultaneously.
In one embodiment of the invention, every node 210, 222, 224, 226, 232, 234 and 236 updates its own multicast state to its neighbors by sending mcUpCh messages. Such an update message allows neighbor nodes to collect information about the multicast channels the node is involved in. This information may comprise whether or not the node is a multicast source, a subscriber and/or an intermediate node forwarding multicast traffic of a multicast service. Note that the mcUpCh message is not flooded across the whole network but is only sent to all or a subset of the nodes of the local neighbor set. The local neighbor set is not a static set; any node can decide whether it has to increase or shrink the size of its neighbor set. A mcUpCh message may contain the following information: i) multicast source information and ii) information on pass-by multicast services. The mcUpCh message is also used to inform the neighbor nodes about a new node that has joined the network. The neighbor nodes receiving the mcUpCh message will update their local neighbor sets accordingly. In one embodiment, nodes periodically send mcUpCh messages to their neighbors. This has the advantage that network changes can be detected.
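A simplified sketch of this mcUpCh exchange is given below; it reuses the McUpCh record and the state sketch introduced above, and the per-neighbor bookkeeping in neighbor_info is an assumption.

def send_upch(state, send):
    msg = McUpCh(sender_id=state.node_id,
                 source_of=sorted(state.sourced_channels),
                 forwarding=sorted(state.channel_table.keys()))
    for nb in state.local_neighbor_set():
        send(nb, msg)

def handle_upch(state, msg, neighbor_info):
    # a previously unknown node announcing itself is added to the local neighbor set
    if msg.sender_id not in state.local_neighbor_set():
        state.physical_neighbors.add(msg.sender_id)
    # remember what this neighbor sources/forwards for later candidate selection
    neighbor_info[msg.sender_id] = {"source_of": set(msg.source_of),
                                    "forwarding": set(msg.forwarding)}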
In a further embodiment of the invention, a node that receives a multicast service but wants to unsubscribe sends a mcLeave message. This message is forwarded hop-by-hop along the distribution path of the multicast service. At every hop, the respective forwarding node removes the routing state of the multicast overlay for the leaving subscriber. After that, traffic data of the unsubscribed multicast service will no longer be forwarded to the leaving node.
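The hop-by-hop mcLeave processing may, for example, be sketched as follows; upstream_neighbor is an assumed helper returning the next hop toward the multicast source for the given channel.

def handle_leave(state, leave, send, upstream_neighbor):
    subscribers = state.channel_table.get(leave.multicast_id)
    if subscribers and leave.subscriber_id in subscribers:
        subscribers.remove(leave.subscriber_id)
        if not subscribers:
            # no receivers left behind this node: drop the channel entry entirely
            del state.channel_table[leave.multicast_id]
    nxt = upstream_neighbor(leave.multicast_id)
    if nxt is not None:
        send(nxt, leave)               # continue hop-by-hop along the distribution path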
In an embodiment of the invention, each multicast service is associated with a multicast service identifier cId that uniquely symbolizes the multicast service. The identifier cId will be used by any node that is interested in the multicast service to initiate its mcJoin request. The cId will also be used when creating the multicast overlay and building the routing states on every node constituting the multicast overlay. A multicast service identifier is formed by an ID pair: cId := <srcId, groupId>, where srcId is the ID value of the multicast source of the multicast service. In one embodiment, the srcId is an identifier of the R2R protocol. The groupId may be generated in relation to the multicast content. For example, it can be generated by applying a hash function on the topic title of the multicast service. Note that there are many other ways to compose the multicast service ID. The only requirements are that such a multicast service ID can uniquely differentiate multicast services and can be easily identified by subscribers in the network.
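By way of example, a multicast service identifier may be composed as sketched below; the concrete hash function (SHA-256, truncated to eight hexadecimal characters) is an assumption, and any collision-resistant mapping of the topic title would do.

import hashlib

def make_cid(src_id: int, topic_title: str):
    # groupId derived from the topic title; srcId is the source's R2R identifier
    group_id = hashlib.sha256(topic_title.encode("utf-8")).hexdigest()[:8]
    return (src_id, group_id)

# e.g. make_cid(42, "football-final-live") -> (42, '<first 8 hex chars of the SHA-256 digest>')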
Fig. 5 depicts an exemplary flow rule composition 500 according to an embodiment of the present invention. The flow rule composition 500 may be present in nodes corresponding to the extended OVS architecture 100. The flow rule composition 500 comprises a flow table 520 that is part of the kernel datapath 110 and a channel table 510 that is part of the userspace. The channel table 510 comprises, for each multicast channel ID of a multicast service, a list of subscribers that have subscribed to this multicast service. As described in the context of the mcPathEst messages S4 and S5, the channel table 510 is updated by adding an entry if the node acts as a forwarding node or is a subscriber node for the multicast service. Similarly, the table is updated by deleting an entry in response to receiving a mcLeave message. The RCA 132 translates the channel table into corresponding entries in the flow table 520.
Compared to an existing OVS node, the flow table 520 of the extended OVS architecture 100 comprises additional list-of-actions entries which are specific to multicast operations. If the OVS kernel determines that packets match() a multicast ID, e.g., cId_a or cId_b, the list of actions specific to that multicast cId is executed. The list of actions may comprise instructions for duplicating the multicast packets and sending them to multiple recipients.
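A conceptual sketch of how the RCA might translate channel-table entries into such flow rules is given below; port_of, which maps a subscriber or next-hop node to a local output port, is an assumption, and the dictionary layout only mimics the match/actions structure of Fig. 5.

def channel_table_to_flow_rules(channel_table, port_of):
    flow_rules = []
    for cid, subscribers in channel_table.items():
        # one output action per subscriber duplicates the packet to every recipient
        actions = [("output", port_of(s)) for s in subscribers]
        flow_rules.append({"match": {"multicast_id": cid},
                           "actions": actions})
    return flow_rules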
Fig. 6 shows an alternative embodiment of flow table 520, which uses generic group actions. Generic means in this context that the group actions are independent of a specific multicast service. The group actions are stored in a separate group table 610. Each entry of group table 610 comprises a group identifier and a list of actions, also referred to as an action bucket. There is at least one action for each subscriber of the multicast service.
Compared to the flow table as depicted in Fig. 5, the multicast-service-specific list-of-actions entries are replaced by a reference to an entry in the group table 610.
Using a list of actions has the advantage that it is simple as long as the number of flow tables and flow rules therein is small. For multicast packets, simple actions of duplication and forwarding are preferred. Therefore, attaching these repeating actions directly to the flow entry simplifies the pipeline mechanism at the node. However, if multicast packets not only involve delivery to the recipients, but also some local modifications and further detailed processing, using group tables has advantages. The action set appended to each multicast packet then only contains one action that specifies "go to a particular group table process". This also simplifies the list of actions that has to be defined per multicast flow rule entry. Therefore, the whole pipeline can focus on other processing needed by the multicast packets.
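For comparison, the group-table variant of Fig. 6 may be sketched as follows; the numeric group identifiers and the data layout are again assumptions.

def channel_table_to_group_rules(channel_table, port_of):
    flow_rules, group_table = [], {}
    for gid, (cid, subscribers) in enumerate(channel_table.items(), start=1):
        # the action bucket of the group holds one output action per subscriber
        group_table[gid] = [("output", port_of(s)) for s in subscribers]
        # the flow rule itself only carries a single indirection to the group entry
        flow_rules.append({"match": {"multicast_id": cid},
                           "actions": [("group", gid)]})
    return flow_rules, group_table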
While the invention has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. From reading the present disclosure, other modifications will be apparent to a person skilled in the art. Such modifications may involve other features, which are already known in the art and may be used instead of or in addition to features already described herein.
The invention has been described in conjunction with various embodiments herein.
However, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Although the present invention has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded simply as an illustration of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present invention.

Claims

1. A node (210) for requesting and receiving a multicast service in a network comprising a plurality of nodes (210, 222, 224, 226, 232, 234, 236) in accordance with a multicast resource-to-resource protocol,
the node (210) having a local neighbor set (211) comprising one or more neighbor nodes (222, 224, 226) of the node (210),
the node (210) being configured to:
send a first message (S1) for requesting the multicast service to all or a subset of the nodes in the local neighbor set (211);
receive, in response to the first message (S1), a second message (S2) from at least one node of the local neighbor set (211), each second message indicating one or more candidate nodes (232, 234, 236) that are adapted to support the node (210) in requesting and receiving the multicast service;
select, according to a rule, a candidate node (232) among the candidate nodes (232, 234, 236) indicated in the received second message(s) (S2) to become a virtual neighbor node;
send, to the node which sent the second message indicating the selected candidate node (232), a third message (S3) for virtually connecting the node (210) and the selected candidate node (232) as virtual neighbor nodes.
2. A method for requesting and receiving a multicast service by a node (210) in a network comprising a plurality of nodes (210, 222, 224, 226, 232, 234, 236) in accordance with a multicast resource-to-resource protocol, the method comprising:
maintaining a local neighbor set (211) comprising one or more neighbor nodes (222, 224, 226) of the node (210);
sending a first message (S1) for requesting the multicast service to all or a subset of the nodes in the local neighbor set (211);
receiving, in response to the first message (S1), a second message (S2) from at least one node of the local neighbor set (211), each second message indicating one or more candidate nodes (232, 234, 236) that are adapted to support the node (210) in requesting and receiving the multicast service;
selecting, according to a rule, a candidate node (232) among the candidate nodes (232, 234, 236) to become a virtual neighbor node;
sending, to the node which sent the second message indicating the selected candidate node (232), a third message (S3) for virtually connecting the node (210) and the selected candidate node (232) as virtual neighbor nodes.
3. The method of claim 2, further comprising receiving a fourth message (S4) in response to the third message and adding the selected candidate node to the local neighbor set.
4. The method of claim 3, further comprising iteratively repeating the steps of claims 2 and 3.
5. The method of any of claims 2 to 4, wherein the rule for selecting the candidate node (232) comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
6. The method of claim 5, wherein the rule for selecting the candidate node (232) further comprises taking into account that the same candidate information is coming from one or more different neighbor nodes (222, 224, 226).
7. A node (224) for supporting a multicast service in a network of a plurality of nodes (210, 222, 224, 226, 232, 234, 236) in accordance with a multicast resource-to-resource protocol, the node (224) having a local neighbor set (211) comprising one or more neighbor nodes (222, 224, 226) of the node (224),
the node (224) being configured to:
receive a first message (S1) from a neighbor node (210) in the local neighbor set requesting a multicast service;
select, according to a rule, one or more candidate nodes (232, 234, 236) among all or a subset of the local neighbor set (225) that are adapted to support the neighbor node (210) in requesting and receiving the multicast service;
send, in response to the first message (S1), a second message (S2) to the neighbor node (210) indicating the selected one or more candidate nodes (232, 234, 236);
receive a third message (S3) from the neighbor node (210) indicating a candidate node (232) to connect to;
send, in response to the third message (S3), a fourth message (S4) to the neighbor node (210) and a fifth message (S5) to the candidate node (232) to connect to for virtually connecting the neighbor node (210) and the candidate node (232) to connect to as virtual neighbor nodes.
8. A method for supporting a multicast service in a network of a plurality of nodes (210, 222, 224, 226, 232, 234, 236) in accordance with a multicast resource-to-resource protocol, the method comprising: maintaining a local neighbor set (225) of one or more neighbor nodes (222, 224, 226) of the node (224);
receiving a first message (S1) from a neighbor node (210) in the local neighbor set requesting a multicast service;
selecting, according to a rule, one or more candidate nodes (232, 234, 236) among all or a subset of the local neighbor set (225) that are adapted to support the neighbor node (210) in requesting and receiving the multicast service;
sending, in response to the first message (S1), a second message (S2) to the neighbor node (210) indicating the selected one or more candidate nodes (232, 234, 236);
receiving a third message (S3) from the neighbor node (210) indicating a candidate node (232) to connect to;
sending, in response to the third message (S3), a fourth message (S4) to the neighbor node (210) and a fifth message (S5) to the candidate node (232) to connect to for virtually connecting the neighbor node (210) and the candidate node (232) to connect to as virtual neighbor nodes.
9. The method of claim 8, wherein the rule for selecting the one or more candidate nodes (232, 234, 236) comprises selecting i) a node that is the source of the multicast service, ii) a node that is already involved in the multicast service, or iii) a node having an ID close to the ID of a multicast source.
10. The method of any of claims 2 to 6 or claims 8 to 9, further comprising: receiving sixth messages from one or more neighbor nodes and updating the local neighbor set (211, 225) based on the sixth messages.
11. The method of any of claims 2 to 6 or claims 8 to 10, further comprising: sending sixth messages to one or more physical or virtual neighbor nodes periodically, or when connecting to a node not being in the local neighbor set (211, 225), or when disconnecting from a node being in the local neighbor set (211, 225).
12. The method of any of claims 2 to 6 or claims 8 to 11, wherein the multicast service is identified by a multicast ID, the multicast ID comprising a source ID and a group ID.
13. The method of claim 12, comprising receiving fourth or fifth messages containing subscriber information and a particular multicast ID and updating an entry in a local channel table based on the subscriber information and the particular multicast ID.
14. The method of claim 13, further comprising receiving a seventh message comprising a subscriber ID and the multicast ID, and deleting an entry in the local channel table based on the subscriber ID and the multicast ID.
15. A system, comprising a plurality of nodes supporting a multicast resource-to-resource protocol, the system comprising:
a first node (210) according to claim 1 , the first node (210) is configured to subscribe to a multicast service;
a plurality of second nodes (224, 232) according to claim 7, each of the plurality of second nodes (224, 232) is configured to be iteratively virtually connected to the first node (210) so as to provide a path to a third node that is either a source node of the multicast service or a node that is already involved in the multicast service, the third node and the plurality of second nodes being configured to provide the multicast service to the first node (210) over the path.
16. Computer program having a program code for performing the method according to any of claims 2 to 6 or claims 8 to 14, when the computer program runs on a computing device.
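
The following is an illustrative, non-limiting sketch in Python of the message exchange recited in claims 7 to 9 (the first to fifth messages S1 to S5) and of the candidate selection rule of claim 9. All identifiers in the sketch (Node, neighbor_set, involved_multicasts, select_candidates, handle_s1 and so on), the use of integer node IDs, the modelling of messages as direct method calls, the use of a single class for both the requesting and the supporting role, and the "pick the first candidate" policy are assumptions introduced only for readability; they are not taken from the claims or the description as filed.

class Node:
    """Minimal model of a node running the multicast resource-to-resource protocol."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbor_set = set()         # local neighbor set (physical and virtual neighbors)
        self.involved_multicasts = set()  # multicast IDs this node already helps to serve
        self.channel_table = {}           # multicast ID -> set of subscriber node IDs

    # Claim 9: rule for selecting candidate nodes.
    def select_candidates(self, multicast_id, source_id):
        candidates = [n for n in self.neighbor_set
                      if n.node_id == source_id                  # i) the source of the multicast service
                      or multicast_id in n.involved_multicasts]  # ii) already involved in the service
        if not candidates and self.neighbor_set:
            # iii) otherwise the neighbor whose ID is closest to the ID of the multicast source
            candidates = [min(self.neighbor_set, key=lambda n: abs(n.node_id - source_id))]
        return candidates

    # Claims 7 and 8: S1 -> S2 -> S3 -> S4/S5, modelled as direct method calls.
    def handle_s1(self, requester, multicast_id, source_id):
        """First message (S1): a neighbor requests support for a multicast service."""
        candidates = self.select_candidates(multicast_id, source_id)
        requester.handle_s2(self, candidates, multicast_id)      # second message (S2)

    def handle_s2(self, supporter, candidates, multicast_id):
        """Second message (S2): the requester picks a candidate and reports it back."""
        chosen = candidates[0]                                   # simplest possible choice policy
        supporter.handle_s3(self, chosen, multicast_id)          # third message (S3)

    def handle_s3(self, requester, chosen, multicast_id):
        """Third message (S3): virtually connect the requester and the chosen candidate."""
        requester.add_virtual_neighbor(chosen, multicast_id)     # fourth message (S4)
        chosen.add_virtual_neighbor(requester, multicast_id)     # fifth message (S5)

    def add_virtual_neighbor(self, other, multicast_id):
        self.neighbor_set.add(other)
        self.channel_table.setdefault(multicast_id, set()).add(other.node_id)


# Example: node 210 asks neighbor 224 for support; neighbor 232 is already involved.
n210, n224, n232 = Node(210), Node(224), Node(232)
n224.neighbor_set.update({n210, n232})
n210.neighbor_set.add(n224)
n232.involved_multicasts.add((1, 9))   # multicast ID as (source ID, group ID), cf. claim 12
n224.handle_s1(n210, multicast_id=(1, 9), source_id=1)
print(n232 in n210.neighbor_set)       # True: 210 and 232 are now virtual neighbors

In this example node 232 is selected because it is already involved in the multicast service, so nodes 210 and 232 end up as virtual neighbors of each other.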
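
Claims 12 to 14 further specify that a multicast service is identified by a multicast ID combining a source ID and a group ID, and that a local channel table is updated on fourth or fifth messages and cleaned up on a seventh message. A minimal sketch of such a data structure, again in Python, is given below; the names MulticastID, ChannelTable, on_connect and on_leave, as well as the pruning of empty entries, are assumptions made only for this example.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class MulticastID:
    """Claim 12: a multicast ID comprises a source ID and a group ID."""
    source_id: int
    group_id: int

@dataclass
class ChannelTable:
    """Local channel table of a node, keyed by multicast ID."""
    entries: dict = field(default_factory=dict)   # MulticastID -> set of subscriber IDs

    def on_connect(self, multicast_id, subscriber_id):
        """Claim 13: a fourth or fifth message carries subscriber information and a multicast ID."""
        self.entries.setdefault(multicast_id, set()).add(subscriber_id)

    def on_leave(self, multicast_id, subscriber_id):
        """Claim 14: a seventh message with a subscriber ID and the multicast ID deletes the entry."""
        subscribers = self.entries.get(multicast_id)
        if subscribers is not None:
            subscribers.discard(subscriber_id)
            if not subscribers:                   # assumption: drop channels with no subscribers left
                del self.entries[multicast_id]

# Example: subscriber 210 joins and later leaves channel (source 1, group 9).
table = ChannelTable()
cid = MulticastID(source_id=1, group_id=9)
table.on_connect(cid, subscriber_id=210)
table.on_leave(cid, subscriber_id=210)
print(table.entries)                              # {}: the entry has been removed again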
Application PCT/EP2018/054648, filed 2018-02-26 (priority date 2018-02-26): Methods, nodes and system for a multicast service. Status: Ceased. Published as WO2019161928A1 (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2018/054648 WO2019161928A1 (en) 2018-02-26 2018-02-26 Methods, nodes and system for a multicast service

Publications (1)

Publication Number Publication Date
WO2019161928A1 true WO2019161928A1 (en) 2019-08-29

Family

ID=61827672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2018/054648 Ceased WO2019161928A1 (en) 2018-02-26 2018-02-26 Methods, nodes and system for a multicast service

Country Status (1)

Country Link
WO (1) WO2019161928A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355371A (en) * 1982-06-18 1994-10-11 International Business Machines Corp. Multicast communication tree creation and control method and apparatus
EP0575279A2 (en) * 1992-06-18 1993-12-22 International Business Machines Corporation Distributed management communications network
US5331637A (en) * 1993-07-30 1994-07-19 Bell Communications Research, Inc. Multicast routing using core based trees
EP1875676A1 (en) * 2005-04-25 2008-01-09 Thomson Licensing S.A. Routing protocol for multicast in a meshed network
US7839850B2 (en) * 2006-01-30 2010-11-23 Juniper Networks, Inc. Forming equal cost multipath multicast distribution structures
US20080205394A1 (en) * 2007-02-28 2008-08-28 Deshpande Sachin G Overlay join latency reduction using preferred peer list

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
AUERBACH J ET AL: "Multicast group membership management in high speed wide area networks", INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS. ARLINGTON, TEXAS, MAY 20 - 24, 19; [PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS], LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. CONF. 11, 20 May 1991 (1991-05-20), pages 231 - 238, XP010023060, ISBN: 978-0-8186-2144-4, DOI: 10.1109/ICDCS.1991.148670 *
B. CAIN; S. DEERING; I. KOUVELAS; B. FENNER; A. THYAGARAJAN: "Internet Group Management Protocol, Version 3", RFC 3376, 2002
B. FENNER; M. J. HANDLEY; H. HOLBROOK; I. KOUVELAS; R. PAREKH; Z. ZHANG; L. ZHENG: "Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised)", RFC, 2016
D. WAITZMAN; C. PARTRIDGE; S. DEERING: "Distance Vector Multicast Routing Protocol", RFC 1075, November 1988
J. MOY: "Multicast Extensions to OSPF", RFC 1584, 1994
J. NICHOLAS; A. ADAMS; W. SIADAK: "Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised)", RFC, 2005

Similar Documents

Publication Publication Date Title
US11071017B2 (en) Forwarding entry generation method, controller, and network device
EP3767881B1 (en) Maximally redundant trees to redundant multicast source nodes for multicast protection
US8009671B2 (en) Multicast method and multicast routing method
US9338079B2 (en) Method of routing multicast traffic
US9749214B2 (en) Software defined networking (SDN) specific topology information discovery
EP1597875B1 (en) Method and device for protocol-independent realization of ip multicast
US8982881B2 (en) Upstream label allocation on ethernets for MP2MP LSPS
US5361256A (en) Inter-domain multicast routing
US9054956B2 (en) Routing protocols for accommodating nodes with redundant routing facilities
US8570857B2 (en) Resilient IP ring protocol and architecture
US7751394B2 (en) Multicast packet relay device adapted for virtual router
US20090161670A1 (en) Fast multicast convergence at secondary designated router or designated forwarder
CN101808004B (en) Method and system for realizing Anycast-RP mechanism
US9288067B2 (en) Adjacency server for virtual private networks
JP2018191290A (en) Method, apparatus, and network system for realizing load balancing
WO2021143279A1 (en) Method and device for segment routing service processing, routing equipment, and storage medium
WO2017201750A1 (en) Method, device and system for processing multicast data
CN113810274B (en) A routing processing method and related equipment
EP2892196B1 (en) Method, network node and system for implementing point-to-multipoint multicast
US10567180B2 (en) Method for multicast packet transmission in software defined networks
US11018886B1 (en) Methods and apparatus for selectively filtering an IP multicast data stream for selected group members of a multicast group
US10764337B2 (en) Communication system and communication method
WO2019161928A1 (en) Methods, nodes and system for a multicast service
KR100310302B1 (en) Method of multicast label switched path establishment using multicast label in mpls network
CN108512762B (en) Multicast implementation method and device

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18714133; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18714133; Country of ref document: EP; Kind code of ref document: A1)