
WO2016164769A1 - Data center endpoint network device with built in switch - Google Patents


Info

Publication number
WO2016164769A1
WO2016164769A1 (PCT/US2016/026714)
Authority
WO
WIPO (PCT)
Prior art keywords
transceiver
fiber
connector
network device
port
Prior art date
Application number
PCT/US2016/026714
Other languages
French (fr)
Inventor
Mohammad H. RAZA
David G. STONE
Ronald M. PLANTE
Original Assignee
Fiber Mountain, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fiber Mountain, Inc. filed Critical Fiber Mountain, Inc.
Publication of WO2016164769A1 publication Critical patent/WO2016164769A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0005Switch and router aspects
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/24Coupling light guides
    • G02B6/42Coupling light guides with opto-electronic elements
    • G02B6/4201Packages, e.g. shape, construction, internal or external details
    • G02B6/4246Bidirectionally operating package structures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/24Coupling light guides
    • G02B6/42Coupling light guides with opto-electronic elements
    • G02B6/4292Coupling light guides with opto-electronic elements the light guide being disconnectable from the opto-electronic element, e.g. mutually self aligning arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0005Switch and router aspects
    • H04Q2011/0037Operation
    • H04Q2011/0047Broadcast; Multicast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0005Switch and router aspects
    • H04Q2011/0052Interconnection of switches
    • H04Q2011/0054Distribute-route

Definitions

  • the present disclosure relates generally to network equipment typically used in data centers, and more particularly to endpoint network devices with increased port density and efficiency.
  • Data center network architectures are generally considered as static configurations, such that once a data center network is built out, the main architecture does not change and there are relatively few changes made to the data center network. This is because each architectural modification or change requires sending personnel to the data center to manually move components or equipment, and/or to change interconnections between the components or equipment within the data center, or to reprogram equipment in the data center. Each architectural modification or change to the data center network incurs cost, sometimes significant cost, increases the risk of errors in the new data center network architecture, and increases the risk of failures resulting from the architectural modification or change.
  • a typical large data center network architecture 10 has network endpoint devices, such as servers 12 and storage devices 20, connected to an aggregation layer of data center network switches 14.
  • Fig. 1 illustrates such an architecture. Smaller data center network architectures may only have a two tier architecture, as opposed to the three tier architecture used in the larger data center networks seen in Fig. 1.
  • the number of tiers (or levels) is dependent upon the number of data center network endpoint devices 12 and 20, and the port capacity of each switch at each level. As a result, the interconnect architecture becomes limited to the number of ports on each switch at each level.
  • the switches used for the aggregation layer can be either switches capable of connecting one server to another server that is directly connected to the data center network switch 14, or the switches 14 may be aggregation type switches where all data traffic is concentrated within one or more switches 14, and then transferred to a distribution layer switch 16 for switching to endpoint destinations.
  • Traditional data center networks, as shown in Fig. 2, consist of servers 104 and storage devices 106, plus connections between the servers and storage devices and to external interfaces.
  • a data center interconnects these devices by means of a switching topology implemented by pathway controlling devices 130, such as switches and routers.
  • the servers 104 and storage devices 106 connect to one another via cable interfaces 118, 120, 122, and 124.
  • Interconnects 112 are used to bundle and reconfigure cable connections between endpoints in cable bundles 114, 116, and 126.
  • As can be seen in Fig. 3, a typical data center network configuration consists of multiple rows of cabinets, where each cabinet encloses a rack of one or more network devices, e.g., switches 102, servers 104 and storage devices 106.
  • Each rack typically includes a top-of-rack (TOR) switch 102 that consolidates data packet traffic in the rack via cables 140 and transports the data packet traffic to a switch known as an end-of-row (EOR) switch 108 via cables 142.
  • the EOR switch is typically larger than a TOR switch, and it processes data packets and switches or routes the data packets to a final destination or to a next stage in the data center network, which in turn may process the data packets for transmission outside the data center network.
  • A TOR switch 102 will switch data packet traffic directly between any two network devices, e.g., servers 104 or storage devices 106, within a given rack. Any data packet traffic destined for locations outside of the rack is sent to the EOR switch 108.
  • the EOR switch 108 will send data packet traffic destined for a network device in a different rack in the same row to the TOR switch 102 of the rack where the network device resides.
  • the TOR switch 102 within the destination rack will then forward the data packet traffic to the intended network device, i.e., the destination device. If the data packet traffic is for network devices outside of the row, e.g., Row 1, the EOR switch 108 will forward the traffic to core switch 110 for further transmission.
  • a TOR switch 102 may be used simply as an aggregator, where the data packet traffic is collected and forwarded to an EOR switch 108.
  • the EOR switch determines the location of the destination network device, and routes the data packet traffic back to the same TOR switch 102 if the data packet traffic is destined for a network device in that rack, to a different TOR switch 102 in a different rack if the traffic is destined for a network device in a different rack in the same row, or to the core switch 110 if the destination of the data packet traffic is outside of that row.
  • The TOR switch 102 may couple the entire data packet traffic from an ingress port to an egress port, or may select individual packets to send to an egress port.
  • A TOR switch 102 retrieves header information of an incoming data packet on an ingress port 144 of the TOR switch, and then performs Access Control List (ACL) functions to determine if the packet has permission to pass through the TOR switch 102. Next, a check is run to see if a connection path was previously established based on the information from within the packet header.
  • TOR switch 102 may run Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), Routing Information Protocol (RIP), or other algorithms to determine if the destination port is reachable by the TOR switch 102. If the TOR switch 102 cannot create a route to the destination network device, the packet is dropped. If the destination network device is reachable, the TOR switch 102 creates a new table entry with the egress port number, corresponding egress header information, and forwards the data packet to the egress port 146. Using this methodology, the TOR switch 102 transfers, or switches, the data packet from the ingress port 144 to the required egress port 146.
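  • As an editorial illustration only (not part of the patent disclosure), the forwarding decision described above can be sketched as follows. The table structure, function names, and the `route_reachable` helper are assumptions, and the routing-protocol lookup (OSPF, BGP, RIP) is reduced to a single placeholder check.

```python
# Illustrative sketch of a conventional TOR switch forwarding decision
# (hypothetical names; not the patent's implementation).

forwarding_table = {}  # header key -> (egress_port, egress_header)

def acl_permits(header):
    # Placeholder ACL check; a real switch consults configured ACL rules.
    return header.get("permitted", True)

def route_reachable(header):
    # Placeholder for OSPF/BGP/RIP route computation to the destination.
    return header.get("dest_reachable", True)

def forward_packet(header, ingress_port=144):
    """Return (action, egress_port) for a packet arriving on the ingress port."""
    if not acl_permits(header):
        return ("drop", None)                     # ACL denies the packet
    key = (header["dst"], header.get("vlan"))
    if key in forwarding_table:                   # previously established path
        egress_port, _ = forwarding_table[key]
        return ("forward", egress_port)
    if not route_reachable(header):               # no route to the destination
        return ("drop", None)
    egress_port = 146                             # chosen egress port
    forwarding_table[key] = (egress_port, {"rewritten": True})
    return ("forward", egress_port)

print(forward_packet({"dst": "10.0.0.7"}))        # ('forward', 146)
```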
  • Redundancy in a data center network is typically implemented by having a primary and a secondary path at each stage in the network.
  • data center network server 12A has two connections from the server to the Aggregation Layer switches designated here as Path A to data center network switch 14A and Path B to data center network switch 14B.
  • data center network server 12A can only transmit the traffic over Path A or Path B.
  • Path A and Path B are configured as a primary and redundant path, where all the data traffic will pass through Path A to data center network switch 14A unless there is a failure in the data center network server 12A Path A transceiver, within the data center network switch 14A, or within the cable interconnections between data center network server 12A and data center network switch 14A. In this scenario, all the data traffic will be transferred over to Path B. Similar configurations exist for data center network storage devices 20.
  • connection points generally include a transceiver and a connector, which are often referred to as a port.
  • Ports can be copper or fiber ports that are built into the device, or the ports can be plug-in modules that contain the transceiver and connector, and that plug into Small Form Factor (SFF) cages intended to accept the plug-in transceiver/connector module.
  • Examples of plug-in transceiver/connector modules include SFP, SFP+, QSFP, CFP, CXP, and other transceiver/connector modules, where the connector extends from an exterior surface of the device, e.g., from a front panel.
  • Copper ports may consist of RJ45 ports supporting Category 5, Category 5E, Category 6, Category 6A, Category 7 or other types of copper interfaces.
  • the fiber ports may be low density or single fiber ports, such as FC, SC, ST, LC, or the fiber ports may be higher density MPO, MXC, or other high density fiber ports.
  • Fiber optic cabling with the low density FC, SC, ST, or LC connectors, or with SFP, SFP+, QSFP, CFP, CXP or other modules, either connects directly to the data center network devices or passes through interconnect or cross connect patch panels before reaching the data center network devices.
  • the cross connect patch panels have equivalent low density FC, SC, ST, or LC connectors, and may aggregate individual fiber strands into high density MPO, MXC or other connectors that are primarily intended to reduce the quantity of smaller cables run to alternate panels or locations.
  • FIG. 5 illustrates details of a conventional data center network switch (or router) 14 or 16 with SFF cages 210 mounted within the network switch 14 or 16 typically to a front or rear panel of the network switch enclosure.
  • External transceiver/connector modules 220 can be inserted into SFF cages 210.
  • The SFF cage 210, transceiver 220 and connector 222 form the port 228.
  • CPU 202 configures switch logic 204 to control data streams through the switch 14 or 16 via paths 208 and the port 228, i.e., transceiver 220 and connector 222.
  • the ports 228 may be copper or fiber ports.
  • connectors 222 can consist of either single copper RJ-45 connectors, single or duplex fiber connectors such as FC, SC, ST, or LC connectors, or multi-fiber connectors, such as MPO or MXC multifiber connectors.
  • Connectors 226 plug into the connectors 222, and cables 224 are used to communicate with other network devices. If the port 228 is a copper port, then cable 224 would be a copper cable that is terminated with an RJ-45 connector used as connector 226. If the port 228 is a simplex or duplex fiber port, then cable 224 would be a single fiber cable terminated with an FC, SC, ST, or LC connector as connector 226. If the port 228 is a high density fiber port, then cable 224 would be a high density fiber cable terminated with MPO or MXC connector as connector 226.
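  • As an editorial illustration (not from the disclosure), the port-type to cable/connector pairing described above can be captured in a small lookup; the dictionary and function names below are hypothetical.

```python
# Minimal sketch of the port-type to cable/connector pairing described above.
PORT_MEDIA = {
    "copper":               ("copper cable",             ["RJ-45"]),
    "simplex/duplex fiber": ("single fiber cable",       ["FC", "SC", "ST", "LC"]),
    "high density fiber":   ("high density fiber cable", ["MPO", "MXC"]),
}

def media_for(port_type):
    cable, connectors = PORT_MEDIA[port_type]
    return f"{cable} terminated with an {' or '.join(connectors)} connector"

print(media_for("high density fiber"))
```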
  • Fig. 6 shows a conventional data center network server 12 having a CPU 250 and associated logic and memory controlling the functionality of the server as is known in the art.
  • the server 12 may also include a video port interface 252, a USB port interface 254, a keyboard port interface 256, a mouse port interface 258, and a craft or RS232 port interface 260.
  • multiple fans 262 provide cooling, and redundant power supplies 264 provide power to the server 12.
  • conventional servers 12 use a network interface 270.
  • the network interface 270 has two network ports 228. The first port is the primary port for communicating with other data center network devices, and the second port is the redundancy port used for communicating with other data center network devices when the first port is not operational.
  • the two network ports are usually on a single Network Interface Card (NIC), but each port (i.e., the primary and secondary ports) may be on separate NIC cards for further redundancy.
  • Using a plug-in NIC card permits different variations of copper or fiber network ports to be used with the server 12, where the variation used depends upon the particular data center network configuration.
  • Fig. 7 shows a conventional two port network interface 270 used in conventional data center network server 12.
  • Each port 228 in the two port network interface has an SFF cage 210 mounted within the server 12, typically to a front or rear panel of the data center network server enclosure.
  • An external transceiver/connector module 220 can then be inserted into the SFF cage 210.
  • CPU 250 communicates with the network link via cable 224 using Ethernet protocol or other communication protocols, shown in Fig. 7 as Ethernet logic 272.
  • connectors 226 can consist of either single copper RJ-45 connectors, or single or duplex fiber connectors.
  • the transceiver 220 may be a copper or fiber port.
  • If the two transceivers 220 are copper ports, then cable 224 would be a copper cable that is terminated with an RJ-45 connector used as connector 226. If the two transceivers 220 are simplex or duplex fiber ports, then cable 224 would be a single fiber cable terminated with an FC, SC, ST, or LC connector as connector 226. Except for switches and routers, conventional network devices, e.g., servers, storage devices, and cross connects, typically do not include multi-fiber ports.
  • a conventional NIC may be used by a server (or other data center network endpoint device) as the network interface 270 to communicate with different data center network devices.
  • Fig. 8 illustrates a conventional Network Interface Card (NIC) 300 that can be used for such a purpose.
  • the NIC 300 is a plug-in card that provides an interface for the data center network endpoint device to interconnect with an external medium, such as cable 224 and connector 226.
  • A NIC card 300 may have a single port 228. In other cases, the NIC card 300 may have dual ports 228, with the second port serving primarily as a redundant or alternate path.
  • the NIC contains the desired interface for a particular application, such as a copper Ethernet interface, Wi-Fi interface, serial port interface, Fiber Channel over Ethernet (FCoE) interface, or other media interface.
  • the NIC communicates with the data center endpoint network devices it is plugged into via a Peripheral Component Interconnect (PCI) interface connection 308.
  • The PCI interface connection 308 is a common network device interconnect standard that plugs into a bus in the data center network device (not shown). In other server designs, a local bus other than the PCI bus may be implemented.
  • the PCI interface connection 308 communicates with PCI interface logic 304 via PCI interface bus 306.
  • the data center endpoint network device CPU (not shown) configures and controls the NIC by communicating with the PCI interface logic 304 through the PCI interface connection 308 and the PCI Interface bus 306.
  • each NIC card is designed for a specific implementation.
  • communication module 302 acts as control logic to convert the PCI interface 304 data stream format into a network switch data stream format for the port 228.
  • the transceiver 220 provides an OSI Layer 1 physical layer interface for the external medium, i.e., cable 224 and connector 226, while the communication module 302 provides OSI layer 2 processing for the external communications.
  • additional OSI layer functions may also be included within the NIC.
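  • As an editorial illustration only, the conventional NIC data path described above (PCI interface logic on the host side, a communication module providing OSI layer 2 processing, and a transceiver providing the OSI layer 1 physical interface) can be modeled as below; all class and method names are hypothetical.

```python
# Sketch of the conventional NIC data path: PCI interface logic hands host data
# to a communication module (OSI layer 2), which hands frames to the
# transceiver (OSI layer 1). Names are illustrative assumptions.

class Transceiver:                      # OSI layer 1: physical medium
    def transmit(self, bits):
        print(f"on the wire/fiber: {len(bits)} bytes")

class CommunicationModule:              # OSI layer 2: framing for the network
    def __init__(self, phy):
        self.phy = phy
    def send(self, payload, dst_mac):
        frame = dst_mac + payload       # simplified Ethernet-style framing
        self.phy.transmit(frame)

class PciInterfaceLogic:                # host-facing side of the NIC
    def __init__(self, comm):
        self.comm = comm
    def write_from_host(self, payload, dst_mac):
        self.comm.send(payload, dst_mac)

nic = PciInterfaceLogic(CommunicationModule(Transceiver()))
nic.write_from_host(b"hello", b"\xaa\xbb\xcc\xdd\xee\xff")
```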
  • The present disclosure relates to data center architectures that implement high density connectors, low density connectors and/or combinations of high and low density connectors directly into data center endpoint network devices, such as servers, storage devices, and any other endpoint network devices, as well as NIC cards that may be plugged into such data center endpoint network devices, thus simplifying cable interconnections between endpoint destinations and intermediary interconnect panels and cross connect panels, as well as reducing the number of switches required within the data center network.
  • Port configurations disclosed in the present disclosure also permit discovery of end-to-end connectivity through the use of managed connectivity cable methods, such as ninth wire, connection point ID (CPID), and other methods within such endpoint network devices.
  • Knowledge of the end-to-end physical configuration of every path, including the discovery of per port path connectivity, permits data center management on a per port and per cable connector basis, including the ability to identify changes in the state of a physical connection in real time.
  • the endpoint network device includes a central processing unit, and a network interface in communication with the central processing unit.
  • the network interface includes at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
  • the at least one port may include a set of ports, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector.
  • the endpoint network device of this exemplary embodiment may further include an enclosure housing the central processing unit and the network interface, where the connector is mounted to a panel of the enclosure for connecting to external media.
  • the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • the connector can be optically coupled to the at least one transceiver using at least one fiber cable.
  • the at least one transceiver includes at least one multiport transceiver, and the connector includes a simplex, duplex, or high density fiber connector. Examples of high density fiber connectors include MPO and MXC connectors.
  • the at least one transceiver includes at least one multiport transceiver, and each of the at least one multiport transceiver ports can be connected to individual fiber connections on the connector.
  • the at least one transceiver includes at least one multiport transceiver and the connector is a high density fiber connector, where each of the multiport transceiver ports can be connected to the high density fiber connector with individual fiber connections.
  • the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as a redundant path connection. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as an alternate path connection permitting data streams to be automatically switched under the central processing control to different endpoints. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as switching an input data stream to a different port on an outgoing transceiver port without terminating the data stream on the endpoint network device.
  • the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as switching an input data stream to multiple different ports on outgoing transceiver ports for multicast or broadcast without terminating the data stream on the endpoint network device.
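  • As an editorial illustration (not the disclosed hardware or firmware), the per-port configurations listed above for a multiport transceiver — redundant path, alternate path, pass-through switching, and multicast or broadcast — can be sketched as a simple configuration model; the enum values and Port class are assumptions.

```python
# Illustrative per-port configuration model for a multiport transceiver.
from enum import Enum

class PortMode(Enum):
    REDUNDANT = "redundant path"
    ALTERNATE = "alternate path"
    PASS_THROUGH = "switch input to another transceiver port, no local termination"
    MULTICAST = "replicate input to several transceiver ports, no local termination"

class TransceiverPort:
    def __init__(self, index):
        self.index = index
        self.mode = PortMode.REDUNDANT
        self.targets = []               # outgoing transceiver ports, if any
    def configure(self, mode, targets=()):
        self.mode, self.targets = mode, list(targets)

ports = [TransceiverPort(i) for i in range(12)]   # e.g., a 12-channel transceiver
ports[1].configure(PortMode.ALTERNATE)
ports[2].configure(PortMode.PASS_THROUGH, targets=[3])
ports[4].configure(PortMode.MULTICAST, targets=[5, 6, 7])
for p in ports[:5]:
    print(p.index, p.mode.value, p.targets)
```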
  • one or more of the ports in the set of ports includes managed connectivity ports capable of reading a physical location identification from a managed connectivity port of an external medium connected to the one or more ports in the set of ports.
  • the endpoint network device includes a central processing unit, and a network interface having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module in communication with the at least one port and with the central processing unit.
  • The at least one port in this embodiment includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver.
  • The communication module is preferably capable of switching data streams received on one fiber of the multi-fiber connector, that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
  • An exemplary embodiment of a data center network architecture includes at least one cluster of endpoint network devices, a distribution layer of network switches, and a core switching layer.
  • each endpoint network device includes a central processing unit, and a network interface in communication with the central processing unit.
  • the network interface may include at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
  • Another exemplary embodiment of a data center network architecture includes at least one cluster of endpoint network devices a distribution layer of high density path switches, and a core switching layer.
  • each endpoint network device may include a central processing unit, and a network interface.
  • The network interface may include at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector, that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
  • each endpoint network device may include a central processing unit, and a network interface in communication with the central processing unit and having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
  • Another exemplary embodiment of a data center network architecture includes at least one cluster of endpoint network devices and a distribution layer of high density path switches.
  • each endpoint network device may include a central processing unit and a network interface.
  • the network interface includes at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium
  • The at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector, that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
  • FIG. 1 is a block diagram of a conventional data center network architecture illustrating a three tier switching architecture;
  • FIG. 2 is a block diagram of a conventional logical data center network topology;
  • FIG. 3 is a block diagram of a row architecture in a conventional data center network;
  • FIG. 4 is a flow diagram for a top of rack switch in a conventional data center network;
  • FIG. 5 is a block diagram of a conventional data center network switch architecture with ports having external transceivers insertable into SFF cages;
  • FIG. 6 is a block diagram of a conventional data center network server, illustrating in part a network interface in communication with a CPU;
  • FIG. 7 is a block diagram of a conventional data center network interface of Fig. 6, illustrating two ports each with an external transceiver insertable into an SFF cage;
  • FIG. 8 is a block diagram of a conventional data center network interface of Fig. 6, illustrating a NIC with a port having an external transceiver insertable into an SFF cage;
  • FIG. 9 is a block diagram of an embodiment of a data center network device according to the present disclosure, illustrating a server with a built in network switch;
  • Fig. 10 is a block diagram of an embodiment of the built in network switch of Fig. 9;
  • Fig. 11 is a block diagram of another embodiment of the built in network switch of Fig. 9;
  • Fig. 12 is a block diagram of another embodiment of the built in network switch of Fig. 9.
  • Fig. 13 is a block diagram of multiple endpoint network devices with various embodiments of the built in network switch of Fig. 9;
  • FIG. 14 is a block diagram of an exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a two tier switching architecture
  • FIG. 15 is a block diagram of another exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a two tier switching architecture with high density path switches in a distribution layer;
  • FIG. 16 is a block diagram of another exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a single tier switching architecture with high density path switches as a distribution layer.
  • A data center network device includes servers, storage devices, cross connect panels, network switches, routers and other data center network devices.
  • A data center endpoint network device includes servers, storage devices and other network devices, but does not include cross connect panels, network switches or routers.
  • the endpoint network device is a network server 400.
  • the network server 400 has a CPU 410 and associated logic, e.g., interface logic 412 and control logic 414, and memory, e.g., memory modules 416 and hard disk 418, controlling the functionality of the network server 400, as is known in the art.
  • the network server 400 may also include a video port interface 420, a USB port interface 422, a keyboard port interface 424, a mouse port interface 426, and a craft or RS232 port interface 428 that communicate with the CPU 410 via interface logic 412.
  • For communicating with different endpoint network devices, network server 400 uses a network interface 402.
  • the network interface 402 has a port 440 and a communication module 450.
  • the port 440 includes a multiport transceiver 442, multiport connector 444, and multi-fiber interconnection cable 446.
  • The communication module 450, along with port 440, permits the CPU 410 to direct traffic (e.g., data packets) from CPU 410 to one or more fibers of the multi-fiber interconnection cable 446 connected to multiport transceiver 442.
  • the CPU 410 configures the communication module 450, and thus the flow of data streams from the CPU to one or more interconnects within multi-fiber interconnect cable 446 within the port 440.
  • Fig. 10 illustrates the network interface 402 of Fig. 9 in more detail, namely providing more detail of an exemplary embodiment of the communication module 450.
  • The CPU 410 communicates with external devices via known communication protocols, such as the Ethernet protocol, and passes data over CPU interface 460 to protocol logic 452, here Ethernet logic, which converts the data into Ethernet protocol packets and passes the Ethernet packets to switch logic 454 over interface 456.
  • CPU 410 configures switch logic 454 to direct the Ethernet packets from Ethernet logic 452 to one or more multi-fiber interconnect cable 446 via multiport transceiver 442 in port 440.
  • the CPU 410 can direct the Ethernet packets to a single outgoing interconnect (or fiber) of multi-fiber interconnect cable 446 via transceiver 442 in port 440.
  • CPU 410 may direct switch logic 454 to transmit the Ethernet packets to multiple fibers in interconnect cable 446 via transceiver 442 in port 440.
  • The switch logic can cross connect a received signal stream from one transceiver port to an outgoing transceiver port without passing through the Ethernet logic circuitry.
  • the signal stream may contain any form of serial bit stream including encrypted data formats.
  • the transceiver ports and switch logic can pass this signal stream unaffected from input to output without knowing the signal stream structure or contents.
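  • As an editorial illustration only, the switch logic behavior described above — the CPU configuring which fiber or fibers carry CPU-originated Ethernet packets (unicast or multicast), and cross connecting a received stream straight to an outgoing transceiver port without it passing through the Ethernet logic — can be modeled as below; all names are hypothetical and the model ignores timing and signaling details.

```python
# Illustrative model of the configurable switch logic described above.

class SwitchLogic:
    def __init__(self):
        self.cpu_egress = {0}           # fibers that carry CPU traffic
        self.cross_connects = {}        # input fiber -> set of output fibers

    def set_cpu_egress(self, fibers):   # CPU 410 configuring switch logic 454
        self.cpu_egress = set(fibers)

    def cross_connect(self, in_fiber, out_fibers):
        self.cross_connects[in_fiber] = set(out_fibers)

    def from_ethernet_logic(self, packet):
        return {f: packet for f in self.cpu_egress}       # unicast or multicast

    def from_fiber(self, in_fiber, bitstream):
        # Opaque serial bit stream; contents (even encrypted) are not inspected.
        return {f: bitstream for f in self.cross_connects.get(in_fiber, set())}

sw = SwitchLogic()
sw.set_cpu_egress([2, 5])                     # CPU packets multicast to fibers 2 and 5
sw.cross_connect(7, [8])                      # pass-through from fiber 7 to fiber 8
print(sw.from_ethernet_logic(b"eth-frame"))
print(sw.from_fiber(7, b"opaque-bits"))
```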
  • Another embodiment of the present disclosure is shown in Fig. 11.
  • the network interface is included on a Network Interface Card (NIC) 500.
  • The network interface has a port 520 and a communications module 501, consisting of switch logic 504 and protocol logic 502, mounted on the NIC 500.
  • the port 520 includes multiport transceiver 522, multiport connector 524 and multiport interconnect cable 526. Port 520 is similar to port 440 described above.
  • the NIC 500 can be installed within an endpoint network device to create a high density endpoint network device, as described herein. It should be noted that while the port 520 with one multiport transceiver 522, multiport connector 524 and multi-fiber interconnect cable 526 is shown, the present disclosure contemplates having a NIC with more than one such port with multiport transceivers in numerous configurations.
  • As shown in Fig. 12, the NIC 700 is similar to the NIC 500 of Fig. 11 and further includes cable identification functionality.
  • the NIC 700 includes a network interface that includes port 710, communication module 730, media reading interface 750, and PCI interface 770.
  • the port 710 includes a multiport transceiver 712, multiport connector 714, and multi-fiber interconnection cable 716.
  • The communication module 730 includes switch logic 732, which is similar to switch logic 454 (seen in Fig. 10) and 504 (seen in Fig. 11), and protocol logic 734, which is similar to protocol logic 452 (seen in Fig. 10) and 502 (seen in Fig. 11).
  • the NIC 700 also includes an adapter 718 within connector 714 that has the capability to detect the presence of adapter 722 within cable connector 720 to read specific cable information via media reading interface 750.
  • the media reading interface 750 and adapters 718 and 722 may be designed with ninth wire technologies interfaces, RFID tagging technology interfaces, connection point ID (CPID) technology interfaces, or other cable managed intelligence technologies.
  • the data center network NIC 700 may be designed with one or more of these different technology interfaces in order to provide the capabilities of supporting more than one particular managed intelligent technology.
  • Each data center network NIC 700 equipped with intelligent cable interfaces has the capability to determine the cable presence and/or cable information available to the media reading interface 750 depending upon the information provided from the intelligent cable.
  • The cable information read from media interface adapter 718 via media reading interface bus 752 by media reading interface 750, and provided to the endpoint network device CPU via PCI interface 770 and PCI interface connection 772, may include, for each cable connection, the cable type, cable configuration, cable length, cable part number, cable serial number, and other available information.
  • the endpoint network device CPU can use this information to determine end to end information regarding the overall communication path and the intermediary connections that make up an end-to-end path.
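  • As an editorial illustration, the kind of per-connection record the media reading interface might report to the host CPU is sketched below; the field names follow the information listed above (cable type, configuration, length, part and serial numbers) but are assumptions, not a real CPID, RFID, or ninth-wire API.

```python
# Hedged sketch of managed-connectivity cable information reporting.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CableInfo:
    present: bool
    cable_type: Optional[str] = None       # e.g., "24-fiber MPO trunk"
    configuration: Optional[str] = None    # e.g., "straight-through"
    length_m: Optional[float] = None
    part_number: Optional[str] = None
    serial_number: Optional[str] = None

def read_connector(adapter_raw: Optional[dict]) -> CableInfo:
    """Translate raw adapter data (ninth wire, RFID, CPID, ...) into CableInfo."""
    if adapter_raw is None:
        return CableInfo(present=False)    # no managed cable detected
    return CableInfo(present=True, **adapter_raw)

print(read_connector({"cable_type": "24-fiber MPO", "length_m": 15.0,
                      "part_number": "PN-1234", "serial_number": "SN-0001",
                      "configuration": "straight"}))
```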
  • the fibers from the multi-fiber interconnect cables 446, 526 or 716 can be implemented as independent connections, each to the same or to different endpoint destinations.
  • Switch logic 454, 504, or 732 can be programmed or reprogrammed in real time to route traffic (e.g., data packets or complete data paths) over one interconnect in interconnect cable 446 via transceiver 442, or over one interconnect in interconnect cable 526 via transceiver 522, or over one interconnect in interconnect cable 716 via transceiver 712, to the same or to different endpoint destinations.
  • Switch logic 454, 504, or 732 can also be programmed or reprogrammed in real time to route traffic (e.g., data packets or complete data paths) over multiple interconnects in interconnect cable 446 via transceiver 442, or over multiple interconnects in interconnect cable 526 via transceiver 522, or over multiple interconnects in interconnect cable 716 via transceiver 712, to the same or to different endpoint destinations.
  • Such configurations provide fast, accurate switchover from one network configuration to a different network configuration.
  • Examples of multiport transmitter components and paired multiport receiver components for the transceivers include the AFBR-77D1SZ twelve-channel transmitter and the AFBR-78D1SZ twelve-channel receiver manufactured by Avago Technologies.
  • Providing one or more ports with multiport transceivers and associated switch logic on a NIC adds new capabilities to endpoint network devices. For example, providing one or more ports with multiport transceivers and associated switch logic on a NIC provides capabilities of adding redundant paths and/or alternate paths for the transmission of traffic. Adding additional port capability to the NIC card permits redundant paths to be set up such that a failure of a primary path permits the switch logic to reconfigure the transceiver so that the data transmission and reception can occur over a different interconnect within the interconnection cable, e.g., cables 446, 526 or 716. Using multiport transceivers permits the use of multiple redundant paths.
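  • As an editorial illustration (not the disclosed implementation), the redundant-path behavior described above — the switch logic remapping a data stream to another interconnect of the same multi-fiber cable when the primary path fails, with multiple backups available — can be sketched as follows; the class name and the health-check mechanism are assumptions.

```python
# Minimal failover sketch for redundant paths within one multi-fiber cable.

class RedundantPort:
    def __init__(self, primary_fiber, backup_fibers):
        self.active = primary_fiber
        self.backups = list(backup_fibers)

    def fiber_healthy(self, fiber, link_status):
        return link_status.get(fiber, False)

    def select_path(self, link_status):
        if self.fiber_healthy(self.active, link_status):
            return self.active
        for fiber in self.backups:                 # multiple redundant paths
            if self.fiber_healthy(fiber, link_status):
                self.active = fiber                # reconfigure the transceiver
                return fiber
        raise RuntimeError("no healthy path available")

port = RedundantPort(primary_fiber=0, backup_fibers=[1, 2, 3])
print(port.select_path({0: False, 1: False, 2: True}))   # fails over to fiber 2
```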
  • the switch logic can be configured to transmit a data stream to one endpoint destination via one transceiver port within transceiver 442, 522 or 712, while another data stream may be transmitted via another transceiver port within transceiver 442, 522 or 712 to a different endpoint destination over a different interconnect (or path).
  • This permits the endpoint network device to send data streams directly to multiple different destinations rather than sending the data streams to a network switch, which in turn routes the traffic to the end destination.
  • the data center network device can reduce the transmission time of the connection by eliminating the network switch preprocessing and routing transfer time in order to connect to two or more different end points.
  • a port includes any of the port types described herein, but this disclosure is not intended to limit the ports contemplated herein, which are provided as exemplary embodiments for the ports that may be used.
  • the port types may include different connector implementations - such as an FC, SC, ST, LC, or other type of single or duplex fiber connector, or a high density port such as an MPO, MXC or other high density multi-fiber panel connector.
  • Individual ports can be dynamically bonded together to create higher bandwidth ports, such as bonding four 10Gbps ports to form a single 40Gbps interface port, ten 10Gbps ports to create a single 100Gbps interface port, four 25Gbps ports to create a single 100Gbps interface port, or other combinations of ports to form multifiber connections between data center network devices.
  • This capability enables data centers to dynamically scale from using data center network devices that operate using 10Gbps ports to data center network devices that operate using 40Gbps ports, 100Gbps ports, or ports with data rates greater than 100Gbps.
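  • As an editorial illustration, the lane-bonding arithmetic above (4 x 10Gbps = 40Gbps, 10 x 10Gbps = 100Gbps, 4 x 25Gbps = 100Gbps) is worked through below; the `bond()` helper is purely illustrative and is not an API from the disclosure.

```python
# Worked example of the port-bonding arithmetic described above.

def bond(lane_count, lane_rate_gbps):
    return lane_count * lane_rate_gbps

for lanes, rate in [(4, 10), (10, 10), (4, 25)]:
    print(f"{lanes} x {rate}Gbps lanes -> {bond(lanes, rate)}Gbps bonded port")
```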
  • The ports of the present disclosure permit the use of all fibers in the IEEE 802.3ba 40GBASE-SR4 optical lane assignments, IEEE 802.3ba 100GBASE-SR10, or IEEE 802.3ba 100GBASE-SR4 optical lane assignments within the connector and allow data center network devices, e.g., interconnect panels and switches, to separate individual links from bonded links.
  • This also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to 24, 48, 72, or greater high density fiber combinations in order to support multi-rate and multi-fiber applications in the same connector.
  • The network interface 402, 500 or 700 can be configured to terminate multifiber bonded ports such as 40Gbps or 100Gbps.
  • By enabling endpoint network devices, such as servers and storage devices, to bond and un-bond fiber pairs, the endpoint network device according to the present disclosure can create bonded pairs that traverse multiple connectors.
  • Switch logic 454, 504 or 732 can be configured to support redundant or alternate path connections for multifiber bonded ports such as 40Gbps or 100Gbps in a single multiport transceiver 442, 522 or 712, or to alternate multiport transceivers 442, 522 or 712 on the network interface port 440, 520 or 710.
  • the two or more separate paths can be configured such that the connection medium is the same, and the overall length of each path is substantially the same to minimize differential delays.
  • FIG. 13 is an exemplary embodiment of multiple endpoint network devices each having a network interface according to the present disclosure, e.g. connecting Endpoint Network Device 2 and Endpoint Network Device 3 to Endpoint Network Device 1, which switches the signal from Endpoint Network Device 2 to Endpoint Network Device 3.
  • network interface 402 of Fig. 10 is deployed in Endpoint Network Device 1.
  • Endpoint Network Devices 2 and 3 may have network interface 402, which also provides switching at Endpoint Network Device 2 or 3, or may have a traditional network interface 270, which does not have any switching capabilities at the endpoint.
  • the NIC 500 of Fig. 11 or the NIC 700 of Fig. 12 may be used in one or more of the Endpoint Network Devices as well.
  • The port 440 along with the switch logic 454 provides a switch function, such that data path routes can connect from one input interconnect (e.g., fiber) of cable 446, e.g., fiber 446A, through transceiver 442, where the optical signal is converted to an electrical data signal and is then sent to the switch logic 454 via path 458A.
  • the switch logic 454 routes the signal directly back to the transceiver 442 via path 458B where the transceiver converts the electrical data signal to an optical signal for transmission along a fiber in cable 446, e.g., fiber 446B, to connector 444.
  • The electrical data signal is fed via path 458A to the switch logic 454, which, under control of CPU 410, has configured connection 454A to path 458B as an outgoing path to transceiver 442.
  • Transceiver 442 converts the data signal into an optical signal which then is transmitted over fiber 446B to outgoing fiber 464B which is connected to Endpoint Network Device 3.
  • a parallel path can be similarly set up from the Endpoint Network Devices, e.g., from Endpoint Network Device 3 to Endpoint Network Device 2.
  • the network interfaces 402 in the Endpoint Network Devices can switch a signal on an input path to one or more of multiple output paths directly, thus eliminating the need of an Application Layer Network Switch in at least some applications in the data center architecture.
  • the network interfaces 402 in the Endpoint Network Devices can switch a signal on an input path 464 to multiple output paths 464 for multicast or broadcast applications.
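  • As an editorial illustration only, the Fig. 13 scenario described above — Endpoint Network Device 1 receiving an optical signal from Device 2 on fiber 446A, converting it in transceiver 442, and switch logic 454 returning it on path 458B so that transceiver 442 retransmits it on fiber 446B toward Device 3 — can be modeled as below; the classes are a simplified sketch, not the disclosed hardware.

```python
# Simplified model of an endpoint network device cross connecting a transit
# signal between two other endpoints (no local termination).

class Transceiver442:
    def to_electrical(self, optical):  return ("elec", optical[1])
    def to_optical(self, electrical):  return ("opt", electrical[1])

class SwitchLogic454:
    def __init__(self):
        self.route = {"458A": "458B"}          # connection 454A set up by CPU 410
    def forward(self, in_path, signal):
        return self.route[in_path], signal

xcvr, logic = Transceiver442(), SwitchLogic454()
incoming = ("opt", "data-from-device-2")        # arrives on fiber 446A
out_path, signal = logic.forward("458A", xcvr.to_electrical(incoming))
outgoing = xcvr.to_optical(signal)              # leaves on fiber 446B toward device 3
print(out_path, outgoing)
```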
  • Fig. 14 shows an exemplary data center network architecture according to the present disclosure.
  • Aggregation Layer switches are replaced by direct endpoint network device to endpoint network device connections.
  • These include direct server to server connections, direct server to storage connections, direct storage to storage connections, direct server to Distribution Layer connections, and direct storage to Distribution Layer connections.
  • Data center endpoint network devices, e.g., data center network servers, are deployed in cluster group 1 and cluster group 2.
  • the data center network servers can be the data center network server 400 shown in Fig. 9 and described above.
  • the network interface deployed in the data center endpoint network device can be the exemplary network interfaces shown in Figs. 10, 11 or 12. For ease of description, this embodiment will be described as using the network interface 402 of Fig. 10.
  • Each data center network server 400 with switch logic 454 and a port 440 having a multiport transceiver 442 has multiple external links which can directly connect to one or more data center network devices, such as switches, other servers, or storage devices.
  • Similarly, each data center network storage device 480 with switch logic 454 and a port 440 having a multiport transceiver 442 has multiple external links which can directly connect to one or more data center network devices, such as switches, servers, or other storage devices.
  • Data center network server 400A has direct connections not only to the distribution layer data center network switches 16A and 16B via Path A and Path B, but also to data center network servers 400B, 400C, and 400D via Path C, Path D, and Path E.
  • The group of data center network servers 400A, 400B, 400C, and 400D shown in this embodiment is referred to as a cluster.
  • Data center network server 400A can pass data traffic directly to data center network servers 400B, 400C, and 400D without incurring the switch delay within an Aggregation Layer network switch 14.
  • Server 400A can also interconnect to the rest of the network by connecting via Path C, Path D or Path E to data center network servers 400B, 400C, or 400D, which in turn can switch the data traffic up to the distribution layer data center network switch 16A or 16B.
  • The cluster group size, which is the number of data center network servers or storage devices that can be interconnected to each other or to distribution layer switches, increases as the number of ports supported by the network interface switch logic increases.
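  • As an editorial, back-of-the-envelope illustration of how cluster group size grows with the number of switch-logic ports, the sketch below assumes a fixed split between uplinks to the distribution layer and peer links within the cluster; that split is an assumption for illustration only and is not specified by the disclosure.

```python
# Illustrative cluster-sizing arithmetic under an assumed port split.

def cluster_size(ports_per_device, uplinks_to_distribution=2):
    peer_links = ports_per_device - uplinks_to_distribution
    return peer_links + 1          # this device plus one peer per remaining port

for ports in (6, 12, 24):
    print(f"{ports}-port interface -> cluster of up to {cluster_size(ports)} devices")
```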
  • the distribution layer has high density path switches 492.
  • The high density path switches 492 support a larger number of paths that can be switched when compared to a standard distribution layer switch 16. Examples of suitable high density path switches are described in Appendix A, which is attached to the present application and is part of the present disclosure.
  • a number of different network topology configurations may be implemented.
  • the number of redundant paths in each cluster can be increased.
  • additional paths can be connected from the data center network endpoint network devices, e.g., servers 400, to the data center high density path switches 492 to be implemented as multiple paths in parallel for additional bandwidth.
  • the number of data center endpoint network devices, e.g., network servers 400 and network storage devices 480 capable of being supported within a larger cluster group can increase.
  • the number of cluster groups interconnected by a single data center high density path switch 492 increases thus reducing the total number of distribution layer switches for the same number of data center endpoint network devices, e.g., servers 400 and storage devices 480.
  • The increase of the path interconnections at the data center endpoint network device level can reduce or eliminate the reliance on the core switching layer, as shown in Fig. 16.
  • the data center high density path switches 492 include a network interface, such as network interface 402 seen in Fig. 10, or NIC 500 seen in Fig. 11 or NIC 700 of Fig. 12. With the switching function built into the data center high density path switches, the switches 492 can be interconnected to provide the interconnections needed to connect to all endpoint network devices in the data center network configuration.
  • the network interfaces contemplated by the present disclosure can utilize various communication protocols for network communications. Further, the network interfaces may use various embodiments of transceivers and connectors for communication paths. As another example, the data center network architectures contemplated by the present disclosure can include single layer and multi-layer switching layers.
  • the present application relates generally to network equipment typically used in data centers, and more particularly to network devices with increased port density and efficiency.
  • connection points generally include a transceiver and a connector, which are often referred to as a port.
  • Ports can be copper or fiber ports that are built into the device, or the ports can be plug-in modules that contain the transceiver and connector and that plug into Small Form Factor (SFF) cages intended to accept the plug-in transceiver/connector module, such as SFP, SFP+, QSFP, CFP, CXP, and other transceiver/connector modules, where the connector extends from an exterior surface of the device, e.g., from a front panel.
  • Fiber ports may be low density or single fiber ports, such as FC, SC, ST, LC, or the fiber ports may be higher density MPO, MXC, or other high density fiber ports.
  • Fiber optic cabling with the low density FC, SC, ST, or LC connectors, or with SFP, SFP+, QSFP, CFP, CXP or other modules, either connects directly to the data center network devices or passes through interconnect or cross connect patch panels before reaching the data center network devices.
  • the cross connect patch panels have equivalent low density FC, SC, ST, or LC connectors, and may aggregate individual fiber strands into high density MPO, MXC or other connectors that are primarily intended to reduce the quantity of smaller cables run to alternate panels or locations.
  • FIG. 1 shows a prior data center network device 10, which is a network switch, with ports 110, each having a transceiver 111 and connector 112 mounted internally to the device 10 such that the connector extends out of a front or rear panel of the device.
  • CPU 102 configures switch logic 104 to direct internal data streams (not shown) out via paths 108 through transceiver 111 and connector 112 in port 110.
  • Ports 110 may be copper or fiber ports.
  • A copper cable (cable 114A) is terminated with an RJ-45 connector (connector 116A), while a fiber cable (cable 114B) is terminated with an FC, SC, ST, or LC connector (connector 116B).
  • FIG. 2 shows a prior data center network device 20 where SFF cages 118 and 124 are mounted within the device 20, typically to a front or rear panel, and external transceiver/connector modules can be inserted into SFF cages 118 or 124.
  • CPU 102 configures switch logic 104 to direct internal data streams (not shown) out via paths 108 through transceiver 121 and connector 122, or through transceiver 126 and connector 128.
  • connectors 122 can consist of either single copper RJ-45 connectors, or single or duplex fiber connectors. Duplex fibers in this case are for bidirectional path communications.
  • Connectors 128 can consist of multi-fiber connectors, such as MPO multifiber connectors.
  • Using SFP or SFP+ transceiver modules permits a single connection to be configured between two data center network devices at data rates of up to 10Gbps.
  • Using QSFP, CFP, CXP, or other transceivers permits a single connection to be configured between two data center network devices at data rates of up to and beyond 100Gbps.
  • MPO multifiber connectors are used for IEEE 802.3ba industry standard 40Gbps and 100Gbps bandwidth fiber connections.
  • Fig. 3 shows IEEE 802.3ba 40GBASE-SR4 optical lane assignments, where 40Gbps bandwidth is achieved by running four fibers of 10Gbps in one direction (Tx) for the 40Gbps transmit path, and four fibers of 10Gbps in the other direction (Rx) for the 40Gbps receive path. This means four fibers in the 12 fiber MPO are unused, thus decreasing connector and cable efficiency.
  • 100Gbps bandwidth fiber connections are achieved by running 10 fibers of 10Gbps in one direction (Tx) for the 100Gbps transmit path, and 10 fibers of 10Gbps in the other direction (Rx) for the 100Gbps receive path.
  • Fig. 4A shows two IEEE 802.3ba 100GBASE-SR10 optical lane assignments for 12 fiber MPOs, where one MPO uses 10 fibers of 10Gbps for the 100Gbps transmit path (Tx), leaving 2 fibers unused, and the other MPO uses 10 fibers of 10Gbps for the 100Gbps receive path (Rx), leaving 2 fibers unused, again decreasing connector and cable efficiency.
  • Fig. 4B shows a 24 fiber MPO, where 10 fibers of 10Gbps are used for the 100Gbps transmit path (Tx), plus 10 fibers of 10Gbps are used for the 100Gbps receive path (Rx), leaving a total of 4 unused fibers, again decreasing connector and cable efficiency.
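  • As an editorial illustration, the fiber-utilization figures implied by the lane assignments above work out as follows: 40GBASE-SR4 uses 8 of 12 MPO fibers, and 100GBASE-SR10 uses 10 of 12 fibers per MPO (or 20 of 24 fibers in a single 24 fiber MPO). The helper below is for illustration only.

```python
# Worked fiber-utilization arithmetic for the standard lane assignments above.

def utilization(used, total):
    return f"{used}/{total} fibers used ({used / total:.0%})"

print("40GBASE-SR4, 12-fiber MPO:    ", utilization(8, 12))
print("100GBASE-SR10, 2x12-fiber MPO:", utilization(20, 24))
print("100GBASE-SR10, 24-fiber MPO:  ", utilization(20, 24))
```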
  • The industry standard method of migrating from a 10Gbps connection to a 40Gbps or 100Gbps connection, or from a 40Gbps connection to a 100Gbps connection, requires reconfiguring the fiber transmit and receive paths by physically changing the ports within the data center network devices, increasing the cost to run the data center. Adding further to the cost to run the data center is that this change has to occur at both ends of the path (i.e., the receive port and the transmit port) as well as the cabling there between.
  • The entire data center network device has to be upgraded, as the transceiver/connector configuration of Fig. 1 or the transceiver/connector/SFF cage configuration of Fig. 2 cannot support the higher data rate speeds on the additional fiber ports associated with 40Gbps or 100Gbps ports.
  • fibers are left unused in the connectors and cables, thus wasting resources and unnecessarily increasing costs for the higher fiber cabling and connectors.
  • Connector 132 (seen in Fig. 2) is a 12 fiber MPO connector and fiber cable 130 is a 12 fiber cable. To use this cable and connector in a 40Gbps or 100Gbps application would leave 2 or 4 fibers unused, depending upon the type of port used.
  • The ports 110, i.e., the transceiver 111 and connector 112 in Fig. 1, or the transceiver 121, connector 122 and SFF cage 118 in Fig. 2, are connected directly to front or rear panels of the network device.
  • the physical size of the transceiver or SFF module significantly limits the number of connectors 112 or cages 118 that can be installed on the front or rear panels of the network device, thus limiting the ability to cost effectively increase port density.
  • the present application relates generally to data center network device architectures that implement high density ports, low density ports and combinations of high density and low density ports, for effective use of data center network device panel space thus increasing port density without the need to replace network devices, connectors and/or transceivers.
  • Data center network devices contemplated by the present application include servers, storage devices, NIC cards, switches, and routers.
  • the present application introduces new methods for increasing the density of the optical interface circuitry within data center network devices to achieve higher density on the device front panel. Additionally, by using combinations of ports, dynamic mixing of speeds of fiber connections within high density fiber connectors on a per fiber basis can be achieved.
  • Port configurations disclosed in the present application also provide discovery of end-to-end connectivity through the use of managed connectivity cable methods such as ninth wire, CPID, and other methods. Knowledge of the end-to-end physical configuration of every path permits data center management on a per port and per cable connector basis.
  • An exemplary embodiment of a data center network device includes, a housing having one or more connection panels, and a set of ports. Each port within the set of ports is configured to receive data streams from an external medium and to transmit data streams to an external medium, and includes a connector and at least one transceiver optically coupled to the connector. The connector is mounted to the connection panel, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • the at least one transceiver may be mounted to a circuit board within the housing or plugged into a cage, e.g., an SFF cage, mounted within the housing.
  • the connector is optically coupled to the at least one transceiver using fiber cables and/or optical waveguides.
  • the transceivers employed in the present application may be low density transceivers, high density transceivers, or combinations of low density transceivers and high density transceivers.
  • Examples of transceivers that may be used in the present application include SFP, SFP+, QSFP, CFP, CXP, and WDM transceivers, and if the transceiver is pluggable in a cage, the cage would be a compatible cage for the transceiver used.
  • FIG. 1 is a block diagram of a prior data center network device architecture with internal ports
  • FIG. 2 is a block diagram of a prior data center network device architecture with external insertable ports
  • FIG. 3 shows IEEE 802.3ba 40GBASE-SR4 optical lane assignments
  • FIG. 4 shows IEEE 802.3ba 100GBASE-SR10 optical lane assignments
  • FIG. 5 is a block diagram of an exemplary embodiment of a data center network device according to the present application with internally mounted insertable ports;
  • FIGs. 5A-5C are block diagrams of exemplary embodiments of the different internally mounted insertable ports used in the data center network device of Fig. 5;
  • Fig. 6 is a block diagram of another exemplary embodiment of a data center network device according to the present application with internal high density ports;
  • FIGs. 6A-6G are block diagrams of exemplary embodiments of the different internally mounted insertable ports used in the data center network device of Fig. 6;
  • FIG. 7 is a block diagram of an exemplary embodiment of a data center NIC according to the present application with internal high density ports;
  • FIG. 8 is a block diagram of an exemplary embodiment of a data center network device according to the present application with internal high density ports and intelligent managed connectivity capabilities.
  • references to input and output, transmit and receive are used as references to simplify explanations.
  • inputs may be outputs, they may switch direction from the output side to the input side, or they may be bidirectional signals. This is similar for the terms transmit and receive.
  • the data center network device 30 is a network switch.
  • the device 30 may be a server, storage device, NIC card, router or other data center network device.
  • the data center network device 30 includes a housing 32 for installation in a rack within the data center.
  • the housing includes a front panel 34 and a rear panel 36 that can be used as a connection point for external connection to other data center network devices.
  • a set of ports is used for transmitting and receiving of data streams between the data center network device 30 and other external data center network devices.
  • the data center network device in the embodiment of Fig. 5 is a switch, which includes switch logic 538 connected to each port via interconnections 540, and a CPU 542 connected, via interconnection 544, to the switch logic 538.
  • the CPU 542 is configured to control the switch logic 538, and thus the flow of data streams from one port to the same or another port within the switch.
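As an illustration of the control relationship described in the preceding bullets, the short Python sketch below models a CPU programming a port-to-port forwarding map into switch logic. It is only a conceptual sketch; the class and method names are hypothetical and do not come from the disclosure.

```python
# Conceptual sketch only: a CPU-side model of programming switch logic with a
# port-to-port forwarding map. All names are hypothetical.

class SwitchLogic:
    """Models the forwarding map that directs a data stream from an ingress
    port to an egress port (which may be the same port)."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.forwarding = {}                 # ingress port -> egress port

    def connect(self, ingress, egress):
        if not (0 <= ingress < self.num_ports and 0 <= egress < self.num_ports):
            raise ValueError("port index out of range")
        self.forwarding[ingress] = egress

    def egress_for(self, ingress):
        return self.forwarding.get(ingress)


# The CPU (control plane) programs the switch logic, e.g. cross-connecting
# ports 0 and 1, and looping port 3 back to itself.
switch = SwitchLogic(num_ports=48)
switch.connect(0, 1)
switch.connect(1, 0)
switch.connect(3, 3)
print(switch.egress_for(0))                  # -> 1
```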
  • the ports that may be used in the set of ports contemplated by the present application may vary.
  • a port includes any of the port types described herein; the port types described are provided as exemplary embodiments and are not intended to limit the ports contemplated by this disclosure.
  • the first port type 500 is a low density port having a low density panel connector 502 and a compatible low density cable 504 connected between the connector 502 and a compatible low density transceiver in SFF 506 mounted within the housing 32.
  • the low density panel connector 502 is preferably an FC, SC, ST, LC, or other type of single or duplex fiber connector
  • the compatible low density transceiver in SFF 506 is an SFP, SFP+, or other type of single or duplex fiber transceiver plugged into an SFF cage configured to receive the pluggable transceiver.
  • External connections to the low density ports 500 are with single fiber or duplex fiber cables 552 using FC, SC, ST, LC, or other types of single or duplex fiber connector 550.
  • the second port type employed in the embodiment of Fig. 5 is a high density port 510, shown in Fig. 5B, having panel connector 512, and a compatible high density cable 514 connected between the connector 512 and a compatible high density transceiver in SFF 516 mounted within the housing 32.
  • the high density panel connector 512 is preferably an MPO, MXC or other high density multi-fiber panel connector used for industry standard 40Gbps and lOOGbps applications
  • the compatible high density transceiver in SFF 516 is a QSFP, CFP, CXP type, or other high density pluggable transceiver used for industry standard 40Gbps and 100Gbps applications, plugged into an SFF cage configured to receive the pluggable transceiver.
  • panel connector 512 is configured according to industry standard fiber configurations. External connections to the high density ports 510 are with multi-fiber cables 556 using MPO, MXC or other high density multi-fiber connectors 554.
  • the third port type employed in the embodiment of Fig. 5 is a high density port 520, shown in Fig. 5C, having a panel connector 522 and multiple compatible high density cables 524 connected between the connector 522 and multiple compatible high density transceivers in SFF 526 mounted within the housing 32.
  • the high density panel connector 522 is a multi-fiber MPO or MXC type panel connector coupled to three compatible high density transceivers in SFF 526, such as SFP, SFP+, QSFP, CFP, CXP type, or other high density transceivers plugged into an SFF cage configured to receive the pluggable transceiver.
  • the third port configuration permits multiple simplex or duplex fiber communications paths from one or more transceivers in SFF 526 to a single MPO or MXC connector 522 independent of each other.
  • External connections to the high density ports 520 are with multi-fiber cables 558 using MPO, MXC or other high density multi-fiber connectors 560.
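To illustrate how a single high density panel connector can carry independent paths to several transceivers in SFF, the following sketch assigns each fiber position of a hypothetical 24 fiber MPO connector to one of three transceivers; the identifiers and the 8-fibers-per-transceiver split are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch only: map each fiber position of a 24 fiber MPO panel
# connector to one of three transceivers in SFF. Identifiers are hypothetical.

transceivers = ["sff_526a", "sff_526b", "sff_526c"]

fiber_map = {}
for fiber in range(1, 25):                    # fiber positions 1..24
    xcvr = transceivers[(fiber - 1) // 8]     # 8 fibers per transceiver
    lane = (fiber - 1) % 8
    fiber_map[fiber] = (xcvr, lane)

# Fibers 1 and 9 share the same panel connector but terminate on different
# transceivers, so their communication paths are independent of each other.
print(fiber_map[1], fiber_map[9])
```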
  • the pluggable transceivers used in each port may be low density or high density transceivers or a combination of low density and high density transceivers.
  • a transceiver has a receiver which receives a data stream from an external medium connected to the data center network device 30, and a transmitter which transmits a data stream to the external medium connected to the data center network device.
  • Examples of low density transceivers include SFP and SFP+ type transceivers, and examples of high density transceivers include QSFP, CFP, CXP type, or other high density transceivers.
  • Transceiver chips such as the FTLX8571D3BCV, manufactured by Finisar Corp., may be employed as the low density transceiver, and transceiver chips such as the FTLQ8181EBLM, also manufactured by Finisar Corp., may be employed as the high density transceiver.
  • the present application is not limited to connectors, transceivers and/or SFF cage configurations capable of supporting data rates of up to 100Gbps.
  • the embodiments of the present application can also support data rates greater than 100Gbps.
  • the transceivers in SFF 506, 516 and 526 are configured in the housing in a staggered arrangement away from the front panel 34 (or rear panel 36) such that each transceiver in SFF is not connected directly to the front panel 34. Only the connectors 502, 512 and 522 are connected to the front panel 34 (or rear panel 36) of the housing 32. This configuration allows more connectors to be connected to the front panel of the device 30, thus increasing the panel density of the device.
  • the data center network device 30 of the present application permits multiple 10Gbps, 40Gbps, and 100Gbps connections in the same high density connectors 522.
  • high density MPO connectors can support up to 72 fibers
  • high density MXC connectors can support up to 64 fibers.
  • the fiber cable group 560, for example, can fan out to as many ports as needed to support the desired fibers for the high density connector 558.
  • the fibers in cable 560 may all terminate into a single data center network device at a remote end of the cable 560, or may be split up via interconnect panels, cross connect panels, hydra cables or other devices capable of splitting the fiber cables, such that the fiber ends are physically routed to different data center network devices.
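The fan-out described above can be captured as a simple record of which fiber groups in the cable reach which remote devices. The sketch below uses made-up endpoint names and fiber ranges purely for illustration.

```python
# Illustrative sketch only: record how fiber groups in one high density cable
# fan out to different remote endpoints. Endpoint names are made up.

fanout = {
    "panel_connector_522": [
        {"fibers": range(1, 13),  "remote": "server-rack07-u12"},
        {"fibers": range(13, 25), "remote": "storage-rack02-u30"},
        {"fibers": range(25, 73), "remote": "interconnect-panel-A"},
    ],
}

def remote_for_fiber(connector, fiber):
    """Return the remote device a given fiber position is routed to."""
    for group in fanout[connector]:
        if fiber in group["fibers"]:
            return group["remote"]
    raise KeyError("fiber %d not assigned on %s" % (fiber, connector))

print(remote_for_fiber("panel_connector_522", 20))   # -> storage-rack02-u30
```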
  • the data center network device 60 is a network switch.
  • the device 60 may be a server, storage device, NIC card, router or other data center network device.
  • the data center network device 60 includes a housing 32, for installation in a rack within the data center.
  • the housing 32 includes a front panel 34 and a rear panel 36 that can be used as a connection point for external connection to other data center network devices.
  • a set of ports is used for transmitting and receiving of data streams between the data center network device 60 and other external data center network devices.
  • the data center network device 60 in the embodiment of Fig. 6 is a switch, which includes switch logic 692 connected to each port via interconnect 690, and a CPU 696 connected, via interconnect 694, to the switch logic 692.
  • the CPU 696 is configured to control the switch logic 692, and thus the flow of data streams from one port to the same or another port within the switch.
  • the ports that may be used in the set of ports contemplated by the present application may vary.
  • a port includes any of the port types described herein; the port types described are provided as exemplary embodiments and are not intended to limit the ports contemplated by this disclosure.
  • Fig. 6 shows several embodiments of transceiver and port connections with additional details of these embodiments shown in Figs. 6A-6D and 6G, and with additional embodiments shown within Figs. 6E and 6F. These transceivers are collectively referred herein as transceivers 698 for ease of reference.
  • Individual 10Gbps ports can be dynamically bonded together to create 40Gbps ports and/or to create 100Gbps ports to form multifiber connections between data center network devices.
  • This capability enables data centers to dynamically scale from using data center network devices that operate using 10Gbps ports to data center network devices that operate using 40Gbps ports, 100Gbps ports, or ports with data rates greater than 100Gbps.
  • the ports of the present application permit the use of all fibers in the IEEE 802.3ba 40GBASE-SR4 optical lane assignments or IEEE 802.3ba 100GBASE-SR10 optical lane assignments within the connector and allow data center network devices, e.g., interconnect panels and switches, to separate individual links from bonded links.
  • This also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to 24, 48, 72, or greater high density fiber combinations in order to support multi-rate and multi-fiber applications in the same connector.
  • This capability also permits the expansion of high density fiber configuration, e.g., 12 fiber MPO configurations, to MXC or other high fiber count configurations without the need for predefined bonding for multi-fiber applications in the same connector.
  • the data center network device can create bonded pairs that traverse multiple connectors.
  • the two or more separate paths can be configured such that the connection medium is the same, and the overall length of each path is substantially the same to minimize differential delays.
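The path-matching constraint in the previous bullet can be sketched as a simple selection routine: candidate fibers are grouped by medium and accepted for bonding only if their lengths stay within a differential-delay budget. The path data and the 1 meter budget below are assumptions made up for the example.

```python
# Illustrative sketch only: pick fibers for a bonded group that share a medium
# and whose lengths are close enough to limit differential delay (skew).
# The candidate paths and the spread budget are made-up examples.

paths = [
    {"fiber": "A1", "medium": "OM4", "length_m": 41.0},
    {"fiber": "A2", "medium": "OM4", "length_m": 41.3},
    {"fiber": "B1", "medium": "OM4", "length_m": 55.0},
    {"fiber": "B2", "medium": "OS2", "length_m": 41.1},
]

def bondable(candidates, lanes_needed, max_spread_m=1.0):
    """Return the first group of paths sharing a medium whose lengths differ
    by no more than max_spread_m, or None if no such group exists."""
    by_medium = {}
    for p in candidates:
        by_medium.setdefault(p["medium"], []).append(p)
    for group in by_medium.values():
        group.sort(key=lambda p: p["length_m"])
        for i in range(len(group) - lanes_needed + 1):
            window = group[i:i + lanes_needed]
            if window[-1]["length_m"] - window[0]["length_m"] <= max_spread_m:
                return window
    return None

print([p["fiber"] for p in bondable(paths, lanes_needed=2)])   # -> ['A1', 'A2']
```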
  • a further capability of the data center network device of the embodiment of Fig. 6 is to permit multiple 10Gbps, 40Gbps, and 100Gbps connections in the same high density connectors.
  • using transceivers 698, which in this embodiment are multiport transceivers, CPU 696 can program switch logic 692 to dynamically map the individual ports to a fiber cable such that all the fibers can be used within the connector to provide multi-rate communications capabilities within the same connector for different connection paths.
  • switch logic 692 can be configured to provide a fixed data reception and transmission rate from one transceiver port to another transceiver port.
  • the switch logic 692 can be programmed by CPU 696 to receive one data rate from one receiver port and transmit out at a different rate on a different transmit port.
  • the transceivers 698 and switch logic 692 provide the data rate retiming and data buffering necessary to support different rate transmit and receive connections.
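A toy model of the retiming and buffering idea is sketched below: data arriving on a receive port at one programmed rate is buffered and drained toward a transmit port at a different programmed rate. This is a conceptual illustration with hypothetical names, not the device's actual logic.

```python
# Toy sketch only: rate adaptation between a receive port programmed at one
# rate and a transmit port programmed at another. Names are hypothetical.
from collections import deque

class RateAdapter:
    def __init__(self, rx_gbps, tx_gbps):
        self.rx_gbps = rx_gbps
        self.tx_gbps = tx_gbps
        self.buffer = deque()                 # buffered payloads (bytes)

    def receive(self, payload):
        self.buffer.append(payload)           # retiming/buffering stage

    def drain(self, interval_s):
        """Emit as many buffered payloads as the transmit rate allows in the
        given time interval."""
        budget_bits = self.tx_gbps * 1e9 * interval_s
        sent = []
        while self.buffer and len(self.buffer[0]) * 8 <= budget_bits:
            payload = self.buffer.popleft()
            budget_bits -= len(payload) * 8
            sent.append(payload)
        return sent

adapter = RateAdapter(rx_gbps=10, tx_gbps=40)
adapter.receive(b"\x00" * 1500)               # a frame received at 10Gbps
print(len(adapter.drain(interval_s=1e-6)))    # -> 1 frame sent at 40Gbps pacing
```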
  • the first port 600 shown in Fig. 6A, includes a multi-port transceiver 602 and single or duplex fiber panel adapters 604, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters.
  • the transceiver 602 is connected to the panel adapter 604 via interconnect 606, which may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between the transceiver 602 and the front panel 34 (or rear panel 36) mounted fiber connector 604.
  • This configuration is an example of a multi-port transceiver 602 configured as individual fiber connections independent of each other.
  • One advantage of this configuration is that the port density can be much greater since the individual multi-port transceiver 602 occupies less printed circuit board real estate than multiple single port transceivers.
  • the second port 610 includes a multi-port transceiver 612 and a high density panel connector 614, such as an MPO, MXC, or other high density connector.
  • the transceiver 612 connects to the multi-fiber high density connector 614 via fiber interconnect 616.
  • the interconnect 616 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 612 and the front panel 34 (or rear panel 36) mounted fiber connector 614.
  • This configuration is an example of combining multiple independent simplex or duplex optical ports from a transceiver for connection to a single multi-fiber cable 682. This permits aggregation of multiple independent fiber links for delivery to a single endpoint or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations or nodes.
  • the third port 620 includes a transceiver 622 and a high density multi-fiber connector 624, such as an MPO, or other high density fiber connector used for industry standard 40Gbps and 100Gbps applications.
  • the transceiver 622 is connected to connector 624 via a compatible multi-fiber interconnect 626.
  • the interconnect 626 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 622 and the front panel 34 (or rear panel 36) mounted fiber connector 624. This configuration supports industry standard 40Gbps and 100Gbps connections using 10Gbps data rates per fiber, or 100Gbps connections using 25Gbps data rates per fiber.
  • the transceivers bond individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to the connector 624.
  • the transceiver can provide 40Gbps, 100Gbps or greater transmission rates.
  • panel connector 624 can be configured according to the industry standard fiber configurations. With this implementation, 8 fibers would be used for data transmission for 40GBASE-SR4 applications, or 10 fibers would be used for 100GBASE-SR10, with the remaining fibers in the MPO connector not configured to pass data.
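The fiber counts discussed here and in the earlier Fig. 3 and Fig. 4 bullets can be checked with a small calculation. The helper below counts both transmit and receive fibers for each application (4 + 4 for 40GBASE-SR4, 10 + 10 for 100GBASE-SR10); the function and dictionary names are illustrative only.

```python
# Illustrative sketch only: fibers used versus left dark in an MPO connector
# for the parallel-optic applications discussed above (counting both the
# transmit and receive directions).

APPLICATIONS = {
    "40GBASE-SR4":   {"tx_fibers": 4,  "rx_fibers": 4},    # 4 x 10Gbps each way
    "100GBASE-SR10": {"tx_fibers": 10, "rx_fibers": 10},   # 10 x 10Gbps each way
}

def fiber_usage(application, mpo_fiber_count):
    lanes = APPLICATIONS[application]
    used = lanes["tx_fibers"] + lanes["rx_fibers"]
    if used > mpo_fiber_count:
        raise ValueError("connector too small for this application")
    return {"used": used, "unused": mpo_fiber_count - used}

print(fiber_usage("40GBASE-SR4", 12))     # -> {'used': 8, 'unused': 4}
print(fiber_usage("100GBASE-SR10", 24))   # -> {'used': 20, 'unused': 4}
```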
  • the fourth port 630 includes a multi-port transceiver 632 and panel connectors 634, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters, MPO, MXC, or other high density connectors, or any combination of these connectors.
  • the transceiver 632 connects to the panel connectors 634 via fiber interconnect 636.
  • the interconnect 636 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 632 and the front panel 34 (or rear panel 36) mounted fiber connectors 634.
  • This configuration is an example of combining multiple independent simplex or duplex optical fibers from a multi-port transceiver for connection to single fiber cables or to multi-fiber cables 678 (seen in Fig. 6). This permits aggregation of multiple independent fiber links into multiple connector types for delivery to a single or different endpoints or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations.
  • the fifth port 640 includes a multi-port transceiver (i.e., a transceiver with multiple connection ports) 642 and panel connectors 644, consisting of an MPO connector as well as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters.
  • the transceiver 642 connects to the panel connectors 644 via fiber interconnect 646.
  • the interconnect 646 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 642 and the front panel (or rear panel) mounted fiber connectors 644.
  • This configuration is an example of combining industry standard 40Gbps and 100Gbps connections using 10Gbps data rates per fiber and independent 10Gbps fiber connections in the same transceiver 642.
  • the transceivers can bond four or ten individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to connector 644. In this way, the transceiver can provide 40Gbps or 100Gbps transmission rates, or transmission rates greater than 100Gbps.
  • panel connectors 644 can be configured with an MPO according to the industry standard fiber configurations plus additional connectors, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters or an additional high density connector such as an MPO, MXC or other type to transport the remaining independent fiber links from transceiver 642.
  • 8 fibers would be used for data transmission for 40GBASE-SR4 applications or 10 fibers would be used for 100GBASE-SR10 with the remaining fibers in the MPO connector not configured to pass data.
  • the sixth port 650, shown in Fig. 6F, includes a transceiver 652 and a high density multi-fiber connector 654, such as an MPO, or other high density fiber connector.
  • the transceiver 652 connects to the panel connectors 654 via fiber interconnect 656.
  • the interconnect 656 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 652 and the front panel 34 (or rear panel 36) mounted fiber connectors 654. This configuration is an example of combining industry standard 40Gbps and 100Gbps connections using 10Gbps data rates per fiber and independent 10Gbps fiber connections in the same transceiver 652 and in the same panel connector 654.
  • the transceivers can bond four or ten individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to connector 654.
  • the transceiver can provide 40Gbps or 100Gbps transmission rates or transmission rates greater than 100Gbps.
  • the connector 654 can carry all the fiber connections from transceiver 652. This permits aggregation of 40GBASE-SR4 or 100GBASE-SR10 applications along with independent fiber links for delivery to a single endpoint or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations.
  • the seventh port 660 shown in Fig. 6G, includes multiple transceiver modules 662 and a high density panel connector 664, such as an MPO, MXC, or other high density connector.
  • the transceiver modules 662 connect to the multi-fiber high density connector 664 via fiber interconnect 666.
  • the interconnect 666 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceivers 662 and the front panel 34 (or rear panel 36) mounted fiber connectors 664.
  • This configuration is an example of combining multiple ports from one or more transceivers for connection to fiber connections in a single multi-fiber cable 666, and permits multiple simplex or duplex fiber, 40GBASE-SR4, 100GBASE-SR10, or other communications paths from one or more transceivers to a single high density connector 664 independent of each other.
  • This permits aggregation of multiple 40GBASE-SR4 applications, 100GBASE-SR10 along with independent fiber links for delivery to a single endpoint or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations.
  • high density MPO connectors can support up to 72 fibers and high density MXC connectors can support up to 64 fibers.
  • fiber cable group 686 (seen in Fig. 6) can fan out to as many transceivers as needed to support the desired fibers for the connector 664.
  • each transceiver is preferably a multiport transceiver that is built into data center network device 60 instead of the embodiment of Fig. 5 where the transceiver is plugged into an SFF cage.
  • Each transceiver is preferably dedicated for a particular industry standard application, such as a 40GBASE-SR4, 100GBASE-SR10 application, or can be individual ports configurable and either independent of one another or capable of being grouped together into a bonded high speed collection of fiber paths.
  • Each transceiver may physically consist of a single multiport transceiver, or may be a multiport transmitter component paired with a multiport receiver component.
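One way to picture the "independent or bonded" configuration choice is as a lane plan that control software validates before applying it. The sketch below assumes bond sizes of 4 or 10 lanes (mirroring the SR4/SR10 groupings discussed above); the plan structure and names are hypothetical.

```python
# Illustrative sketch only: validate a lane plan for a 12-lane multiport
# transceiver, where lanes run independently or in bonded groups of 4 or 10.
# The plan format and names are hypothetical.

ALLOWED_BOND_SIZES = {4, 10}

def validate_lane_plan(plan):
    """plan maps a group name to the list of transceiver lanes it uses."""
    seen = set()
    for name, lanes in plan.items():
        if len(lanes) > 1 and len(lanes) not in ALLOWED_BOND_SIZES:
            raise ValueError("group %s: unsupported bond size %d" % (name, len(lanes)))
        overlap = seen.intersection(lanes)
        if overlap:
            raise ValueError("lanes reused across groups: %s" % sorted(overlap))
        seen.update(lanes)

lane_plan = {
    "bonded_40g": [0, 1, 2, 3],   # one 40Gbps bonded group (4 x 10Gbps lanes)
    "indep_a":    [4],            # independent 10Gbps lanes
    "indep_b":    [5],
}
validate_lane_plan(lane_plan)
print("lane plan accepted")
```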
  • Examples of multiport transceivers include the FBOTD10SL1C00 12-Lane Board-mount Optical Assembly manufactured by Finisar Corp.
  • Examples of multiport transmitter components and paired multiport receiver components include the AFBR-77D1SZ twelve-channel transmitter and the AFBR-78D1SZ twelve-channel receiver.
  • the transceivers may be configured in the housing 32 in a staggered arrangement away from the front panel 34 (or rear panel 36) such that the transceivers are not connected directly to the front panel 34 (or rear panel 36).
  • This configuration allows more connectors to be connected to the front panel (or rear panel) of the device 60, thus increasing the panel density of the device.
  • the panel density of the data center network device is further increased over the increased panel density provided by the embodiment of Fig. 5.
  • the transceivers may also support single transmission connections such as 1Gbps, 25Gbps, 56Gbps, or other transmission rates, as well as Wavelength Division Multiplexor (WDM), Coarse Wavelength Division Multiplexor (CWDM), and Dense Wavelength Division Multiplexor (DWDM) interfaces.
  • a port as described herein is a component having a transceiver and connector, as described with reference to Fig. 5.
  • for the embodiment of Fig. 6, a transceiver port relates to multiport transceivers where each transceiver port of the transceiver is independently capable of receiving a data stream from an external medium connected to the data center network device, and transmitting a data stream to the external medium connected to the data center network device.
  • a Network Interface Card (NIC) 70 is shown with a port configured by high density connector 702 and multiport transceiver 704.
  • the transceiver 704 may be a transceiver chip mounted to the NIC 70, or a pluggable transceiver and an SFF cage mounted to the NIC 70, or a separate transmitter and receiver mounted to the NIC 70.
  • the NIC is a plug-in card to a data center network device which provides an interface for the data center network device to interconnect to an external medium.
  • the NIC card contains the desired interface for a particular application, such as a copper Ethernet interface, Wi-Fi interface, serial port, Fibre Channel over Ethernet (FCoE) interface, or other media interface.
  • the NIC interconnects to the data center network device via a Peripheral Component Interconnect (PCI) Interface Connection 712, as one common device interconnect standard.
  • the data center network device CPU configures and controls the NIC via PCI interface logic 714 over PCI Interface bus 716.
  • each NIC card is designed for a specific application or function.
  • function block 708 provides control logic to convert the PCI Interface data stream format into a data stream format for transceiver 704 and vice versa.
  • the transceiver 704 provides the OSI Layer 1 physical layer interface for the external port 702 interface, while functional block 708 provides the OSI layer 2 processing for the external communications.
  • additional OSI Layer functions may also be included within the NIC card.
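As a purely illustrative picture of the Layer 2 role played by function block 708, the sketch below wraps a host-side payload in a minimal Ethernet-style header and a stand-in frame check sequence before it would be handed to the physical-layer transceiver. The framing shown is a generic simplification, not the card's actual data stream format.

```python
# Illustrative sketch only: a toy Layer 2 framing step standing in for the
# conversion between the PCI-side data stream and the transceiver-side frame.
import struct
import zlib

def build_frame(dst_mac, src_mac, ethertype, payload):
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("!I", zlib.crc32(header + payload))   # stand-in FCS only
    return header + payload + fcs

frame = build_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),
    src_mac=bytes.fromhex("020000000001"),
    ethertype=0x0800,
    payload=b"host data delivered over the PCI interface",
)
print(len(frame))   # 14-byte header + payload + 4-byte stand-in FCS
```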
  • Transceiver 704 connects to the multi-fiber high density connector 702 via fiber interconnect 766.
  • the interconnect 766 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 704 and the NIC edge panel mounted fiber connectors 702.
  • the NIC can be installed within a data center network device to create a high density data center network device as described herein.
  • one transceiver 704 is shown on the NIC 70, but more than one transceiver module may be added to the NIC 70 similar to the embodiments shown in Figs. 5 and 6.
  • the ports can be configured to support individual 10Gbps data rates, 40Gbps or 100Gbps data rates, or data rates greater than 100Gbps, as described above.
  • the connections can be individual fiber connections, IEEE 802.3ba 40GBASE-SR4 optical lane assignments, IEEE 802.3ba 100GBASE-SR10 optical lane assignments, or may be dynamically configured by the data center network device CPU.
  • Each fiber connector may have one or more associated Light Emitting Diodes (LEDs) used for status and control information.
  • Each LED may be a single color or multicolor LED as determined for the product implementation.
  • Each LED may have a blink rate and color used to identify specific states for the port.
  • the LEDs can be illuminated by the data center network device CPU to indicate information, and may include port status for a single active port or multiple ports for each connector.
  • the LEDs can also be used during installation or Moves-Adds-and-Changes to indicate to data center personnel which connector port is to be serviced.
  • the data center network device CPU may also indicate port status information by a Liquid Crystal Display (LCD) located near the panel connectors.
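The LED behavior described above can be summarized as a simple mapping from port states to color and blink rate that the device CPU drives. The states and encodings below are invented for illustration only.

```python
# Illustrative sketch only: map port states to the LED color/blink patterns a
# device CPU could drive. The states and encodings are invented examples.

LED_PATTERNS = {
    "link_up":        {"color": "green", "blink_hz": 0},   # solid on
    "activity":       {"color": "green", "blink_hz": 2},
    "fault":          {"color": "amber", "blink_hz": 1},
    "service_needed": {"color": "blue",  "blink_hz": 4},   # locate during Moves-Adds-Changes
    "disabled":       {"color": "off",   "blink_hz": 0},
}

def led_for_port(port_state):
    return LED_PATTERNS.get(port_state, LED_PATTERNS["disabled"])

# A technician performing a Move-Add-Change would look for the flagged LED.
print(led_for_port("service_needed"))
```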
  • FIG. 8 another embodiment of a data center network device 90 according to the present application is disclosed.
  • the data center network device is similar to the device described above with reference to Fig. 6 as well as the Network Interface card shown in Fig. 7, and permits the implementation of the capability to interpret cable information from cables connected to the data center network device 90, by obtaining intelligent information from within the cables.
  • adapters 920, 922, 924, 926 have the capability, via interface 906, to detect the presence of a cable connector 670, 674, 680, 684, 688, 970, 980, 984, 988, or others not shown, inserted into intelligent adapter 920, 922, 924, 926, and, in the case of intelligence equipped cable connectors 970, 980, 984, 988, and others not shown, to read specific cable information by reading the information in cable media 910.
  • the data center network device 90 may be designed with ninth wire technologies interfaces, RFID tagging technology interfaces, connection point ID (CPID) technology interfaces, or other cable managed intelligence technologies.
  • the data center network device 90 may be designed with one or more of these different technology interfaces in order to provide the capabilities of supporting more than one particular managed intelligent technology.
  • Each data center network device 90 equipped with intelligent cable interfaces has the capability to determine the cable presence and/or cable information available to the interface depending upon the information provided from the intelligent cable.
  • the cable information read from media interface adapter 906 via media interface bus 904 by media reading interface 902 and provided to CPU 942 may include, for each cable connection, the cable type, cable configuration, cable length, cable part number, cable serial number, and other information available to be read by media reading interface 902. This information is collected by media reading interface 902 and passed to the CPU 942 via control bus 944. The CPU 942 can use this information to determine end-to-end information regarding the overall communication path and the intermediary connections which make up an end-to-end path.
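The end-to-end determination can be pictured as chaining per-connection cable records, of the kind listed above, from one endpoint to the next. The record layout and endpoint names in the sketch are hypothetical and are not the CPID, ninth wire, or RFID formats themselves.

```python
# Illustrative sketch only: chain per-connection cable records (as a media
# reading interface might report them) into an end-to-end path description.
# The record layout and endpoint names are hypothetical.

cable_records = [
    {"serial": "CBL-0001", "type": "MPO-24", "length_m": 12.0,
     "a_end": "server-12A/port1", "b_end": "interconnect-panel-3/port7"},
    {"serial": "CBL-0042", "type": "MPO-24", "length_m": 30.5,
     "a_end": "interconnect-panel-3/port7", "b_end": "switch-60/connector-664"},
]

def end_to_end_path(records, start):
    """Walk the records from a starting endpoint, following matching ends."""
    path, here = [start], start
    remaining = list(records)
    while remaining:
        hop = next((r for r in remaining if r["a_end"] == here), None)
        if hop is None:
            break
        path.append(hop["b_end"])
        here = hop["b_end"]
        remaining.remove(hop)
    return path

print(end_to_end_path(cable_records, "server-12A/port1"))
```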
  • aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module” or "system.”
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • a data center network device comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • the data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver.
  • At least one of the plurality of transceivers comprises a combination of low density transceivers and high density transceivers.
  • the at least one transceiver comprises a pluggable transceiver and a cage for receiving the pluggable transceiver.
  • the data center network device according to claim 10, wherein the cage comprises an SFF cage.
  • the connector comprises a simplex, duplex, or high density fiber connector.
  • the high density fiber connector comprises MXC connectors.
  • the at least one transceiver comprises at least one multiport transceiver, and wherein each of the at least one multiport transceiver ports can be connected to individual fiber connections on the connector.
  • the at least one transceiver comprises at least one multiport transceiver and the connector comprises high density fiber connector, and wherein each of the multiport transceiver ports can be connected to the high density fiber connector with individual fiber connections.
  • the at least one transceiver comprises at least one multiport transceiver and the connector comprises IEEE 802.3ba 40GBASE-SR4 or 100GBASE-SR10 connector configurations, and wherein each of the multiport transceiver ports can be connected to the connectors
  • the at least one transceiver comprises at least one multiport transceiver and each of the multiport transceiver ports can be split between different connection panel connectors.
  • the at least one transceiver comprises at least one multiport transceiver and each of the multiport transceiver ports can be combined into a single high density connection panel connector.
  • the at least one transceiver comprises at least one multiport transceiver and connector comprises a plurality of connectors, wherein at least one of the plurality of connectors comprises an IEEE 802.3ba 40GBASE-SR4 or 100GBASE-SR10 connector configurations and at least one of the plurality of connectors comprises a high density fiber connector.
  • one or more of the ports in the set of ports comprise managed connectivity ports capable of reading a physical location identification from a managed connectivity port from an external medium connected to the one or more ports in the set of ports.
  • a network switch comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • a network server comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • a network storage device comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • a network router comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • a network NIC card comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
  • a data center network device provides configurations where the port density can be increased by incorporating multiport transceivers within the device and the use of high density fiber connections on exterior panels of the device.
  • the device also permits dynamically reassigning fiber connections to convert from single fiber connection paths to higher rate bonded fiber paths while at the same time making more efficient use of the fiber interconnections.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Small-Scale Networks (AREA)

Abstract

The present disclosure relates to data center architectures that implement high density connectors, low density connectors and/or combinations of high and low density connectors directly into endpoint network devices, such as servers, storage devices and any other endpoint network devices, as well as network interface cards that may be plugged into such data center endpoint network devices, thus simplifying cable interconnections between endpoint destinations and intermediary interconnect panels and cross connect panels, as well as to reducing the number of switches required within the data center network.

Description

Patent Application for DATA CENTER ENDPOINT NETWORK DEVICE WITH BUILT IN SWITCH
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application claims priority to co-pending U.S. Provisional Application No. 62/145,352, filed April 9, 2015, titled, "Data Center Endpoint Network Device with Built in Switch," which is hereby incorporated herein in its entirety by reference.
BACKGROUND [0002] Field
[0003] The present disclosure relates generally to network equipment typically used in data centers, and more particularly to endpoint network devices with increased port density and efficiency.
[0004] Description of the Related Art
[0005] Data center network architectures are generally considered as static configurations, such that once a data center network is built out, the main architecture does not change and there are relatively few changes made to the data center network. This is because each architectural modification or change requires sending personnel to the data center to manually move components or equipment, and/or to change interconnections between the components or equipment within the data center, or to reprogram equipment in the data center. Each architectural modification or change to the data center network incurs cost, sometimes significant cost, increases the risk of errors in the new data center network architecture, and increases the risk of failures resulting from the architectural modification or change. [0006] Conceptually, a typical large data center network architecture 10 has network endpoint devices, such as servers 12 and storage devices 20, connected to an aggregation layer of data center network switches 14. The aggregation layer switches 14, often referred to as top-of-rack (TOR) switches, are in turn connected to a distribution layer of data center network switches 16. The distribution layer of switches 16, often configured as end-of-row (EOR) switches, are in turn connected to a core data center network switch 18 forming a core switching layer. Fig. 1 illustrates such an architecture. Smaller data center network architectures may only have a two tier architecture, as opposed to the three tier architecture used in the larger data center networks seen in Fig. 1. Typically, the number of tiers (or levels) is dependent upon the number of data center network endpoint devices 12 and 20, and the port capacity of each switch at each level. As a result, the interconnect architecture becomes limited to the number of ports on each switch at each level. The switches used for the aggregation layer can be either switches capable of connecting one server to another server that is directly connected to the data center network switch 14, or the switches 14 may be aggregation type switches where all data traffic is concentrated within one or more switches 14, and then transferred to a distribution layer switch 16 for switching to endpoint destinations.
[0007] From a logical perspective, traditional data center networks, as shown in Fig. 2, consist of servers 104 and storage devices 106, plus connections between the servers, storage devices and to external interfaces. A data center interconnects these devices by means of a switching topology implemented by pathway controlling devices 130, such as switches and routers. As networks grow in size, so does the complexity. The servers 104 and storage devices 106 connect to one another via cable interfaces 118, 120, 122, and 124. Interconnects 112 are used to bundle and reconfigure cable connections between endpoints in cable bundles 114, 116, and 126. As can be seen in Fig. 2, data center networks become layered with multiple pathway controlling devices 130 in an attempt for every endpoint to have the capability of switching and/or routing data packets to any other endpoint within the data center network. This can result in very complex hierarchical switching networks which in turn require considerable power and expense in order to maintain and respond to configuration changes within the network. [0008] From a physical perspective, a typical data center network configuration, shown in Fig. 3, consists of multiple rows of cabinets, where each cabinet encloses a rack of one or more network devices, e.g., switches 102, servers 104 and storage devices 106. Typically, for each rack there is a top-of-rack (TOR) switch 102 that consolidates data packet traffic in the rack via cables 140 and transports the data packet traffic to a switch known as an end-of-row (EOR) switch 108 via cables 142. The EOR switch is typically larger than a TOR switch, and it processes data packets and switches or routes the data packets to a final destination or to a next stage in the data center network, which in turn may process the data packets for transmission outside the data center network. Typically, there are two TOR switches 102 for every rack TOR switch 102A and TOR switches 102B (not shown), and two EOR switches 108 A and 108B for each row, where the second switch in each case is typically for redundancy purposes.
[0009] In one configuration, a TOR switch 102 will switch data packet traffic directly between any two network devices, e.g., servers 104 or storage devices 106, within a given rack. Any data packet traffic destined for locations outside of the rack are sent to the EOR switch 108. The EOR switch 108 will send data packet traffic destined for a network device in a different rack in the same row to the TOR switch 102 of the rack where the network device resides. The TOR switch 102 within the destination rack will then forward the data packet traffic to the intended network device, i.e., the destination device. If the data packet traffic is for network devices outside of the row, e.g., Row 1, the EOR switch 108 will forward the traffic to core switch 110 for further transmission.
[00010] In other configurations, a TOR switch 102 may be used simply as an aggregator, where the data packet traffic is collected and forwarded to an EOR switch 108. The EOR switch then determines the location of the destination network device, and routes the data packet traffic back to the same TOR switch 102 if the data packet traffic is destined for a network device in that rack, to a different TOR switch 102 in a different rack if the traffic is destined for a network device in a different rack in the same row, or to the core switch 110 if the destination of the data packet traffic is outside of that row. [00011] The TOR switch 102 may couple the entire data packet traffic from an ingress port to an egress port, or may selectively select individual packets to send to an egress port. Referring to Fig. 4, in conventional applications, a TOR switch 102 retrieves header information of an incoming data packets on an ingress port 144 of the TOR switch, and then performs Access Control List (ACL) functions to determine if a packet has permission to pass through the TOR switch 102. Next, a check is run to see if a connection path was previously based on the information from within the packet header. If not, then TOR switch 102 may run Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), Routing Information Protocol (RIP), or other algorithms to determine if the destination port is reachable by the TOR switch 102. If the TOR switch 102 cannot create a route to the destination network device, the packet is dropped. If the destination network device is reachable, the TOR switch 102 creates a new table entry with the egress port number, corresponding egress header information, and forwards the data packet to the egress port 146. Using this methodology, the TOR switch 102 transfers, or switches, the data packet from the ingress port 144 to the required egress port 146.
[00012] Redundancy in a data center network is typically implemented by having a primary and a secondary path at each stage in the network. Referring again to Fig. 1, data center network server 12A has two connections from the server to the Aggregation Layer switches designated here as Path A to data center network switch 14A and Path B to data center network switch 14B. To transmit data traffic to any other data center network device, data center network server 12A can only transmit the traffic over Path A or Path B. Typically, Path A and Path B are configured as a primary and redundant path, where all the data traffic will pass through Path A to data center Switch 14A unless there is a failure in the data center network server 12A Path A transceiver within the data center network switch 14 A, or within the cable interconnections between data center network server 12A and data center network switch 14A. In this scenario, all the data traffic will be transferred over to Path B. Similar configurations exist for data center network storage devices 20.
[00013] As noted above, traditional data center network devices, such as servers, storage devices, switches, and routers, as well as Network Interface Cards (NICs) that may be added to such network devices have physical connection points to transmit and receive data. These connection points generally include a transceiver and a connector, which are often referred to as a port. Ports can be copper or fiber ports that are built into the device, or the ports can be plug-in modules that contain the transceiver and connector, and that plug into Small Form Factor (SFF) cages intended to accept the plug-in transceiver/connector module. Examples of plug-in transceiver/connector modules, include SFP, SFP+, QSFP, CFP, CXP, and other transceiver/connector modules, where the connector extends from an exterior surface of the device, e.g., from a front panel. Copper ports may consist of RJ45 ports supporting Category 5, Category 5E, Category 6, Category 6A, Category 7 or other types of copper interfaces. The fiber ports may be low density or single fiber ports, such as FC, SC, ST, LC, or the fiber ports may be higher density MPO, MXC, or other high density fiber ports.
[00014] Fiber optic cabling with the low density FC, SC, ST, or LC connectors or with SFP, SFP+, QSFP, CFP, CXP or other modules either connect directly to the data center network devices, or they pass through interconnector cross connect patch panels before getting to the data center network devices. The cross connect patch panels have equivalent low density FC, SC, ST, or LC connectors, and may aggregate individual fiber strands into high density MPO, MXC or other connectors that are primarily intended to reduce the quantity of smaller cables run to alternate panels or locations.
[00015] Fig. 5 illustrates details of a conventional data center network switch (or router) 14 or 16 with SFF cages 210 mounted within the network switch 14 or 16, typically to a front or rear panel of the network switch enclosure. External transceiver/connector modules 220 can be inserted into SFF cages 210. The SFF cage 210, transceiver 220 and connector 222 form the port 228. CPU 202 configures switch logic 204 to control data streams through the switch 14 or 16 via paths 208 and the port 228, i.e., transceiver 220 and connector 222. As noted above, the ports 228 may be copper or fiber ports. In this configuration, connectors 222 can consist of either single copper RJ-45 connectors, single or duplex fiber connectors such as FC, SC, ST, or LC connectors, or multi-fiber connectors, such as MPO or MXC multifiber connectors. Connectors 226 plug into the connectors 222, and cables 224 are used to communicate with other network devices. If the port 228 is a copper port, then cable 224 would be a copper cable that is terminated with an RJ-45 connector used as connector 226. If the port 228 is a simplex or duplex fiber port, then cable 224 would be a single fiber cable terminated with an FC, SC, ST, or LC connector as connector 226. If the port 228 is a high density fiber port, then cable 224 would be a high density fiber cable terminated with an MPO or MXC connector as connector 226.
[00016] Fig. 6 shows a conventional data center network server 12 having a CPU 250 and associated logic and memory controlling the functionality of the server as is known in the art. The server 12 may also include a video port interface 252, a USB port interface 254, a keyboard port interface 256, a mouse port interface 258, and a craft or RS232 port interface 260. Typically, multiple fans 262 provide cooling, and redundant power supplies 264 provide power to the server 12. For communicating with different data center network devices, conventional servers 12 use a network interface 270. Typically, the network interface 270 has two network ports 228. The first port is the primary port for communicating with other data center network devices, and the second port is the redundancy port used for communicating with other data center network devices when the first port is not operational. The two network ports are usually on a single Network Interface Card (NIC), but each port (i.e., the primary and secondary ports) may be on separate NIC cards for further redundancy. Using a plug-in NIC card permits different variations of copper or fiber network ports to be used with the server 12, where the variation used depends upon the particular data center network configuration.
[00017] Fig. 7 shows a conventional two port network interface 270 used in conventional data center network server 12. Each port 228 in the two port network interface has an SFF cage 210 mounted within the server 12, typically to a front or rear panel of the data center network server enclosure. An external transceiver/connector module 220 can then be inserted into the SFF cage 210. CPU 250 communicates with the network link via cable 224 using Ethernet protocol or other communication protocols, shown in Fig. 7 as Ethernet logic 272. In this configuration, connectors 226 can consist of either single copper RJ-45 connectors, or single or duplex fiber connectors. The transceiver 220 may be a copper or fiber port. If the two transceivers 220 are copper ports, then cable 224 would be a copper cable that is terminated with an RJ-45 connector used as connector 226. If the two transceivers 220 are simplex or duplex fiber ports, then cable 224 would be a single fiber cable terminated with an FC, SC, ST, or LC connector as connector 226. Except for switches and routers, conventional network devices, e.g., servers, storage devices, and cross connects, typically do not include multi-fiber ports.
[00018] As noted above, a conventional NIC may be used by a server (or other data center network endpoint device) as the network interface 270 to communicate with different data center network devices. Fig. 8 illustrates a conventional Network Interface Card (NIC) 300 that can be used for such a purpose. The NIC 300 is a plug-in card that provides an interface for the data center network endpoint device to interconnect with an external medium, such as cable 224 and connector 226. In some implementations, a NIC card 300 may have a single port 228. In other cases, the NIC card 300 may have dual ports 228, with the second ports primarily as redundant or alternate path. The NIC contains the desired interface for a particular application, such as a copper Ethernet interface, Wi-Fi interface, serial port interface, Fiber Channel over Ethernet (FCoE) interface, or other media interface. In Fig. 8, the NIC communicates with the data center endpoint network devices it is plugged into via a Peripheral Component Interconnect (PCI) interface connection 308. The PCI interface connection 308 is a common network device interconnect standard that plugs into a bus in the data center network device (not shown). In other server designs, a different local bus other than PCI bus may be implemented. The PCI interface connection 308 communicates with PCI interface logic 304 via PCI interface bus 306. The data center endpoint network device CPU (not shown) configures and controls the NIC by communicating with the PCI interface logic 304 through the PCI interface connection 308 and the PCI Interface bus 306. Generally, each NIC card is designed for a specific implementation. In the configuration of Fig. 8, communication module 302 acts as control logic to convert the PCI interface 304 data stream format into a network switch data stream format for the port 228. The transceiver 220 provides an OSI Layer 1 physical layer interface for the external medium, i.e., cable 224 and connector 226, while the communication module 302 provides OSI layer 2 processing for the external communications. Depending upon the NIC implementation, additional OSI layer functions may also be included within the NIC. [00019] Today, data centers like those described above are built within an enterprise network, a service provider network, or a shared, colocation facility where the networks of many disparate owners reside. With the significant increase in business and individual use of the Internet, and the significant need for bandwidth to transmit high volumes of data, especially video and graphics, data centers have become extremely complex and are under pressure to evolve to handle the boom in growth. Data centers like those described above are typically very expensive to build, operate and maintain, and as the complexity increases so do the costs to build, operate and maintain data centers. Moreover, data centers like those described above have layers of switches to process data streams. Each switching operation creates delays in data transmissions. As a result, data center operators are searching for ways to reduce costs and increasing data processing and transmission capabilities, while meeting all reliability requirements. The present disclosure provides a means by which data center operators can reduce costs, increase data transmission rates, while meeting or exceeding all reliability requirements.
BRIEF SUMMARY
[00020] The present disclosure relates to data center architectures that implement high density connectors, low density connectors and/or combinations of high and low density connectors directly into data center endpoint network devices, such as servers, and storage devices, and any other endpoint network devices, as well as NIC cards that may be plugged into such data center endpoint network devices, thus simplifying cable interconnections between endpoint destinations and intermediary interconnect panels and cross connect panels, as well as to reducing the number of switches required within the data center network.
[00021] Using high density fiber ports, low density ports, and combinations of high density and low density ports as disclosed herein provides, within the same footprint occupied by a conventional SFP+ connector/transceiver configuration, a single MPO solution that can be implemented to support up to 72 fibers.
[00022] By introducing multiport transceivers into endpoint network devices as disclosed herein, the present disclosure introduces new methods for increasing the density of optical interface circuitry within endpoint network devices, such as servers, storage, and network interface cards, in order to implement switching capabilities inside such endpoint network devices and eliminate a layer of data center network switches.
[00023] Port configurations disclosed in the present disclosure also permit discovery of end-to-end connectivity through the use of managed connectivity cable methods, such as ninth wire, connection point ID (CPID), and other methods, within such endpoint network devices. Knowledge of the end-to-end physical configuration of every path, including the discovery of per port path connectivity, permits data center management on a per port and per cable connector basis, including the ability to identify changes in the state of a physical connection in real time.
[00024] In an exemplary embodiment of an endpoint network device according to the present disclosure, the endpoint network device includes a central processing unit, and a network interface in communication with the central processing unit. The network interface includes at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams. The at least one port may include a set of ports, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector.
[00025] The endpoint network device of this exemplary embodiment may further include an enclosure housing the central processing unit and the network interface, where the connector is mounted to a panel of the enclosure for connecting to external media. Preferably, the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector. The connector can be optically coupled to the at least one transceiver using at least one fiber cable. [00026] In some embodiments, the at least one transceiver includes at least one multiport transceiver, and the connector includes a simplex, duplex, or high density fiber connector. Examples of high density fiber connectors include MPO and MXC connectors. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the at least one multiport transceiver ports can be connected to individual fiber connections on the connector. In some embodiments, the at least one transceiver includes at least one multiport transceiver and the connector is a high density fiber connector, where each of the multiport transceiver ports can be connected to the high density fiber connector with individual fiber connections.
[00027] In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as a redundant path connection. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured as an alternate path connection, permitting data streams to be automatically switched under central processing unit control to different endpoints. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured to switch an input data stream to a different outgoing transceiver port without terminating the data stream on the endpoint network device. In some embodiments, the at least one transceiver includes at least one multiport transceiver, and each of the multiport transceiver ports can be configured to switch an input data stream to multiple different outgoing transceiver ports for multicast or broadcast without terminating the data stream on the endpoint network device.
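The four per-port configurations described above can be pictured with the following minimal Python sketch. The names used here (PortMode, MultiportTransceiver, configure) are illustrative assumptions and are not taken from this disclosure or from any product.

    from enum import Enum, auto

    class PortMode(Enum):
        REDUNDANT = auto()     # standby path used only when a primary path fails
        ALTERNATE = auto()     # selectable path to a different endpoint destination
        PASS_THROUGH = auto()  # input switched to a different outgoing transceiver port
        MULTICAST = auto()     # input copied to several outgoing transceiver ports

    class MultiportTransceiver:
        """A transceiver with several ports, each individually configurable by the CPU."""
        def __init__(self, port_count):
            self.modes = {p: None for p in range(port_count)}  # port index -> PortMode

        def configure(self, port, mode):
            self.modes[port] = mode

    # Example: a 12-port transceiver with one alternate path, one redundant path,
    # one pass-through port and three multicast fan-out ports.
    xcvr = MultiportTransceiver(12)
    xcvr.configure(0, PortMode.ALTERNATE)
    xcvr.configure(1, PortMode.REDUNDANT)
    xcvr.configure(2, PortMode.PASS_THROUGH)
    for p in range(3, 6):
        xcvr.configure(p, PortMode.MULTICAST)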
[00028] In some embodiments of the endpoint network device according to the present disclosure, one or more of the ports in the set of ports includes managed connectivity ports capable of reading a physical location identification from a managed connectivity port of an external medium connected to the one or more ports in the set of ports.
[00029] In another exemplary embodiment of an endpoint network device according to the present disclosure, the endpoint network device includes a central processing unit, and a network interface having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module in communication with the at least one port and with the central processing unit. Preferably, the at least one port in this embodiment includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver. The communication module is preferably capable of switching data streams received on one fiber of the multi-fiber connector, which pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, back through the multiport transceiver and a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
[00030] An exemplary embodiment of a data center network architecture according to the present disclosure includes at least one cluster of endpoint network devices, a distribution layer of network switches, and a core switching layer. In some embodiments, each endpoint network device includes a central processing unit, and a network interface in communication with the central processing unit. The network interface may include at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
[00031] Another exemplary embodiment of a data center network architecture according to the present disclosure includes at least one cluster of endpoint network devices, a distribution layer of high density path switches, and a core switching layer. In this exemplary embodiment, each endpoint network device may include a central processing unit, and a network interface. The network interface may include at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector, which pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, back through the multiport transceiver and a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
[00032] Another exemplary embodiment of a data center network architecture according to the present disclosure includes at least one cluster of endpoint network devices and a distribution layer of high density path switches. In some embodiments, each endpoint network device may include a central processing unit, and a network interface in communication with the central processing unit and having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
[00033] Another exemplary embodiment of a data center network architecture according to the present disclosure includes at least one cluster of endpoint network devices and a distribution layer of high density path switches. In this exemplary embodiment, each endpoint network device may include a central processing unit and a network interface. In some embodiments, the network interface includes at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector, which pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, back through the multiport transceiver and a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector, in response to instructions from the central processing unit.
BRIEF DESCRIPTION OF THE DRAWINGS
[00034] Fig. 1 is a block diagram of a conventional data center network architecture illustrating a three tier switching architecture;
[00035] Fig. 2 is a block diagram of a conventional logical data center network topology;
[00036] Fig. 3 is a block diagram of a row architecture in a conventional data center network;
[00037] Fig. 4 is a flow diagram for a top of rack switch in a conventional data center network;
[00038] Fig. 5 is a block diagram of a conventional data center network switch architecture with ports having external transceivers insertable into SFF cages;
[00039] Fig. 6 is a block diagram of a conventional data center network server, illustrating in part a network interface in communication with a CPU;
[00040] Fig. 7 is a block diagram of a conventional data center network interface of Fig. 6, illustrating two ports each with an external transceiver insertable into an SFF cage;
[00041] Fig. 8 is a block diagram of a conventional data center network interface of Fig. 6, illustrating a NIC with a port having an external transceiver insertable into an SFF cage;
[00042] Fig. 9 is a block diagram of an embodiment of a data center network device according to the present disclosure, illustrating a server with a built in network switch;
[00043] Fig. 10 is a block diagram of an embodiment of the built in network switch of Fig. 9; [00044] Fig. 11 is a block diagram of another embodiment of the built in network switch of Fig. 9;
[00045] Fig. 12 is a block diagram of another embodiment of the built in network switch of Fig. 9;
[00046] Fig. 13 is a block diagram of multiple endpoint network devices with various embodiments of the built in network switch of Fig. 9;
[00047] Fig. 14 is a block diagram of an exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a two tier switching architecture;
[00048] Fig. 15 is a block diagram of another exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a two tier switching architecture with high density path switches in a distribution layer; and
[00049] Fig. 16 is a block diagram of another exemplary embodiment of a data center network architecture according to the present disclosure, illustrating a single tier switching architecture with high density path switches as a distribution layer.
DETAILED DESCRIPTION
[00050] For the purpose of this disclosure, a data center network device includes servers, storage devices, cross connect panels, network switches, routers and other data center network devices. For the purpose of this disclosure, a data center endpoint network device includes servers, storage devices and other network devices, but does not include cross connect panels, network switches or routers.
[00051] Referring now to Fig. 9, an exemplary endpoint network device according to the present disclosure is shown. In this embodiment, the endpoint network device is a network server 400. The network server 400 has a CPU 410 and associated logic, e.g., interface logic 412 and control logic 414, and memory, e.g., memory modules 416 and hard disk 418, controlling the functionality of the network server 400, as is known in the art. The network server 400 may also include a video port interface 420, a USB port interface 422, a keyboard port interface 424, a mouse port interface 426, and a craft or RS232 port interface 428 that communicate with the CPU 410 via interface logic 412. Typically, multiple fans 430 provide cooling, and redundant power supplies 432 provide power to the network server 400. For communicating with different endpoint network devices, network server 400 uses a network interface 402. The network interface 402 has a port 440 and a communication module 450. The port 440 includes a multiport transceiver 442, multiport connector 444, and multi-fiber interconnection cable 446.
[00052] The communication module 450 along with port 440 permits the CPU 410 to direct traffic (e.g., data packets) from the CPU 410 to one or more fibers of the multi-fiber interconnection cable 446 connected to multiport transceiver 442. The CPU 410 configures the communication module 450, and thus the flow of data streams from the CPU to one or more interconnects within multi-fiber interconnect cable 446 within the port 440.
[00053] Fig. 10 illustrates the network interface 402 of Fig. 9 in more detail, namely providing more detail of an exemplary embodiment of the communication module 450. In this exemplary embodiment, the CPU 410 communicates with external devices via a known communication protocol, such as the Ethernet protocol, and passes data over CPU interface 460 to protocol logic 452 (here, Ethernet protocol logic), which converts the data into Ethernet protocol packets and passes the Ethernet packets to switch logic 454 over interface 456. CPU 410 configures switch logic 454 to direct the Ethernet packets from Ethernet logic 452 to one or more fibers of multi-fiber interconnect cable 446 via multiport transceiver 442 in port 440. In the case of point-to-point connections, the CPU 410 can direct the Ethernet packets to a single outgoing interconnect (or fiber) of multi-fiber interconnect cable 446 via transceiver 442 in port 440. In the case of a multicast, broadcast, port mirroring, test port access, or other application, CPU 410 may direct switch logic 454 to transmit the Ethernet packets to multiple fibers in interconnect cable 446 via transceiver 442 in port 440.
[00054] While the above protocol logic is described as Ethernet protocol logic 452, the present disclosure contemplates using other known communication protocols and associated logic, for example, Fiber Channel protocol logic. Additionally, because the switch can cross connect a receive signal stream from one transceiver port to an outgoing transceiver port without passing through the Ethernet logic circuitry, the signal stream may contain any form of serial bit stream, including encrypted data formats. The transceiver ports and switch logic can pass this signal stream unaffected from input to output without knowing the signal stream structure or contents.
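As a rough illustration only, the following Python sketch models how CPU-configured switch logic might map an ingress (the protocol logic or a transceiver port) to one or several outgoing fibers of the multi-fiber interconnect cable. The class and method names (SwitchLogic, set_route, forward) are hypothetical and are not drawn from the disclosure.

    class SwitchLogic:
        """Maps an ingress source to the egress fibers handed to the multiport transceiver."""
        def __init__(self):
            self.routes = {}  # ingress id -> list of egress fiber indices

        def set_route(self, ingress, egress_fibers):
            # CPU-driven configuration: one fiber for a point-to-point connection,
            # several fibers for multicast, broadcast, port mirroring or test access.
            self.routes[ingress] = list(egress_fibers)

        def forward(self, ingress, frame):
            # Returns (fiber, frame) pairs; the transceiver converts each to an optical signal.
            return [(fiber, frame) for fiber in self.routes.get(ingress, [])]

    switch = SwitchLogic()
    switch.set_route("ethernet_logic", [0])   # point-to-point on fiber 0
    switch.set_route("mirror_tap", [0, 5])    # port mirroring copies traffic to fibers 0 and 5
    print(switch.forward("ethernet_logic", b"\x00" * 64))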
[00055] Another embodiment of the present disclosure is shown in Fig. 11. In this embodiment, the network interface is included on a Network Interface Card (NIC) 500. The network interface has a port 520 and a communications module 501, consisting of switch logic 504 and protocol logic 502, mounted on the NIC 500. The port 520 includes multiport transceiver 522, multiport connector 524 and multi-fiber interconnect cable 526. Port 520 is similar to port 440 described above.
[00056] The NIC 500 can be installed within an endpoint network device to create a high density endpoint network device, as described herein. It should be noted that while the port 520 with one multiport transceiver 522, multiport connector 524 and multi-fiber interconnect cable 526 is shown, the present disclosure contemplates having a NIC with more than one such port with multiport transceivers in numerous configurations.
[00057] Referring to Fig. 12, another exemplary embodiment of a data center network interface card is shown. In this embodiment, the NIC 700 is similar to the NIC 500 of Fig. 11 and further includes the implementation of cable identification functionality. The NIC 700 includes a network interface that includes port 710, communication module 730, media reading interface 750, and PCI interface 770. The port 710 includes a multiport transceiver 712, multiport connector 714, and multi-fiber interconnection cable 716. The communication module 730 includes switch logic 732, which is similar to switch logic 454 (seen in Fig. 10) and 504 (seen in Fig. 11), and protocol logic 734, which is similar to protocol logic 452 (seen in Fig. 10) and 502 (seen in Fig. 11). It should be noted that while the port 710 with one multiport transceiver 712, multiport connector 714 and multi-fiber interconnect cable 716 is shown, the present disclosure contemplates having a NIC with more than one such port with multiport transceivers in numerous configurations. The NIC 700 also includes an adapter 718 within connector 714 that has the capability to detect the presence of adapter 722 within cable connector 720 and to read specific cable information via media reading interface 750. To read the cable information, the media reading interface 750 and adapters 718 and 722 may be designed with ninth wire technology interfaces, RFID tagging technology interfaces, connection point ID (CPID) technology interfaces, or other managed cable intelligence technologies. In another embodiment, the data center network NIC 700 may be designed with one or more of these different technology interfaces in order to provide the capability of supporting more than one particular managed intelligence technology.
[00058] Each data center network NIC 700 equipped with intelligent cable interfaces has the capability to determine the cable presence and/or cable information available to the media reading interface 750 depending upon the information provided from the intelligent cable.
[00059] The cable information read from media interface adapter 718 via media reading interface bus 752 by media reading interface 750, and provided to the endpoint network device CPU via PCI interface 770 and PCI interface connection 772, may include, for each cable connection, the cable type, cable configuration, cable length, cable part number, cable serial number, and other available information. The endpoint network device CPU can use this information to determine end-to-end information regarding the overall communication path and the intermediary connections that make up an end-to-end path.
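A minimal data-model sketch of the per-connection cable record described above is given below in Python. The field names and the build_path helper are assumptions made for illustration; they are not fields mandated by this disclosure or by any managed connectivity standard.

    from dataclasses import dataclass

    @dataclass
    class CableRecord:
        cable_type: str        # e.g. "MPO-12 multimode"
        configuration: str     # e.g. "straight-through"
        length_m: float
        part_number: str
        serial_number: str

    def build_path(records):
        # Concatenate the per-hop records collected by the CPU over the PCI interface
        # into a simple end-to-end path description.
        return " -> ".join(f"{r.cable_type} ({r.serial_number})" for r in records)

    hop1 = CableRecord("MPO-12 multimode", "straight-through", 15.0, "PN-0001", "SN-A1")
    hop2 = CableRecord("LC duplex multimode", "crossover", 3.0, "PN-0002", "SN-B7")
    print(build_path([hop1, hop2]))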
[00060] Referring again to the embodiments of Figs. 10, 11 and 12, the fibers from the multi-fiber interconnect cables 446, 526 or 716 can be implemented as independent connections, each to the same or to different endpoint destinations. Switch logic 454, 504, or 732 can be programmed or reprogrammed in real time to route traffic (e.g., data packets or complete data paths) over one interconnect in interconnect cable 446 via transceiver 442, or over one interconnect in interconnect cable 526 via transceiver 522, or over one interconnect in interconnect cable 716 via transceiver 712, to the same or to different endpoint destinations. In combination with or as an alternative embodiment, switch logic 454, 504, or 732 can also be programmed or reprogrammed in real time to route traffic (e.g., data packets or complete data paths) over multiple interconnects in interconnect cable 446 via transceiver 442, or over multiple interconnects in interconnect cable 526 via transceiver 522, or over multiple interconnects in interconnect cable 716 via transceiver 712, to the same or to different endpoint destinations. Such configurations provide fast, accurate switchover from one network configuration to a different network configuration with no manual physical reconnections. Examples of multiport transmitter components and paired multiport receiver components for the transceivers include the AFBR-77D1SZ twelve-channel transmitter and the AFBR-78D1SZ twelve-channel receiver manufactured by Avago Technologies.
[00061] Providing one or more ports with multiport transceivers and associated switch logic on a NIC adds new capabilities to endpoint network devices. For example, it provides the capability of adding redundant paths and/or alternate paths for the transmission of traffic. Adding additional port capability to the NIC card permits redundant paths to be set up such that, upon a failure of a primary path, the switch logic can reconfigure the transceiver so that data transmission and reception occur over a different interconnect within the interconnection cable, e.g., cables 446, 526 or 716. Using multiport transceivers permits the use of multiple redundant paths. Using multiport transceivers also permits the creation of dedicated physical ports between the NIC and different endpoint destinations. For example, the switch logic can be configured to transmit a data stream to one endpoint destination via one transceiver port within transceiver 442, 522 or 712, while another data stream may be transmitted via another transceiver port within transceiver 442, 522 or 712 to a different endpoint destination over a different interconnect (or path). This permits the endpoint network device to send data streams directly to multiple different destinations rather than sending the data streams to a network switch, which in turn routes the traffic to the end destination. By including this capability within the endpoint network device or on the NIC installed in an endpoint network device, the data center network device can reduce the transmission time of the connection by eliminating the network switch preprocessing and routing transfer time needed to connect to two or more different endpoints.
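The failover behavior described above can be pictured with the short Python sketch below. The RedundantPort class and its methods are illustrative assumptions rather than an implementation of the disclosed switch logic.

    PATH_UP, PATH_DOWN = "up", "down"

    class RedundantPort:
        def __init__(self, primary_fiber, backup_fibers):
            self.active = primary_fiber
            self.backups = list(backup_fibers)
            self.state = {primary_fiber: PATH_UP, **{f: PATH_UP for f in backup_fibers}}

        def report_failure(self, fiber):
            self.state[fiber] = PATH_DOWN
            if fiber == self.active:
                self.fail_over()

        def fail_over(self):
            # Retarget traffic to the first healthy backup interconnect in the cable.
            for fiber in self.backups:
                if self.state[fiber] == PATH_UP:
                    self.active = fiber
                    return
            raise RuntimeError("no healthy path available")

    port = RedundantPort(primary_fiber=0, backup_fibers=[1, 2])
    port.report_failure(0)
    print(port.active)  # traffic is now carried on fiber 1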
[00062] The ports that may be used in the set of ports contemplated by the present disclosure may vary. For the purpose of this disclosure, a port includes any of the port types described herein; these port types are provided as exemplary embodiments and are not intended to limit the ports contemplated herein. The port types may include different connector implementations, such as an FC, SC, ST, LC, or other type of single or duplex fiber connector, or a high density port such as an MPO, MXC or other high density multi-fiber panel connector.
[00063] Individual ports can be dynamically bonded together to create higher bandwidth ports, such as bonding four 10Gbps ports to form a single 40Gbps interface port, ten 10Gbps ports to create a single 100Gbps interface port, four 25Gbps ports to create a single 100Gbps interface port, or other combinations of ports to form multifiber connections between data center network devices. This capability enables data centers to dynamically scale from using data center network devices that operate using 10Gbps ports to data center network devices that operate using 40Gbps ports, 100Gbps ports, or ports with data rates greater than 100Gbps. Further, the ports of the present disclosure permit the use of all fibers in the IEEE802.3ba 40GBASE-SR4 optical lane assignments, IEEE802.3ba 100GBASE-SR10 optical lane assignments, or IEEE802.3ba 100GBASE-SR4 optical lane assignments within the connector and allow data center network devices, e.g., interconnect panels and switches, to separate individual links from bonded links. This also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to 24, 48, 72, or greater high density fiber combinations in order to support multi-rate and multi-fiber applications in the same connector. This capability also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to MXC or other high fiber count configurations without the need for predefined bonding for multi-fiber applications in the same connector. The network interface 402, 500 or 700 can be configured to terminate multifiber bonded ports such as 40Gbps or 100Gbps.
[00064] Additionally, by utilizing endpoint network devices, such as servers and storage devices, to bond and un-bond fiber pairs, the endpoint network device according to the present disclosure can create bonded pairs that traverse multiple connectors. Switch logic 454, 504 or 732 can be configured to support redundant or alternate path connections for multifiber bonded ports such as 40Gbps or 100Gbps in a single multiport transceiver 442, 522 or 712, or to alternate multiport transceivers 442, 522 or 712 on the network interface port 440, 520 or 710. In most cases for this type of application, the two or more separate paths can be configured such that the connection medium is the same, and the overall length of each path is substantially the same to minimize differential delays.
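The bonding arithmetic recited above can be checked with the short sketch below; bond_ports is a hypothetical helper, not part of this disclosure or of the IEEE 802.3 standards.

    def bond_ports(lane_rate_gbps, lane_count):
        """Aggregate identical lanes into a single logical interface rate."""
        return lane_rate_gbps * lane_count

    assert bond_ports(10, 4) == 40    # four 10Gbps lanes -> one 40Gbps interface port
    assert bond_ports(10, 10) == 100  # ten 10Gbps lanes  -> one 100Gbps interface port
    assert bond_ports(25, 4) == 100   # four 25Gbps lanes -> one 100Gbps interface port
    # Each bonded lane uses one transmit fiber and one receive fiber, so a bonded
    # 40Gbps port built from four lanes occupies eight fibers of a multi-fiber connector.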
[00065] A further capability of implementing switch functionality on an endpoint network device, e.g., a network server, storage device or other endpoint device, is to add the switching functionality within that endpoint network device to create standalone switch applications. Fig. 13 is an exemplary embodiment of multiple endpoint network devices each having a network interface according to the present disclosure, e.g., connecting Endpoint Network Device 2 and Endpoint Network Device 3 to Endpoint Network Device 1, which switches the signal from Endpoint Network Device 2 to Endpoint Network Device 3. In the exemplary embodiment of Fig. 13, network interface 402 of Fig. 10 is deployed in Endpoint Network Device 1. Endpoint Network Devices 2 and 3 may each have a network interface 402, which also provides switching at Endpoint Network Device 2 or 3, or may have a traditional network interface 270, which does not have any switching capabilities at the endpoint. In an alternate embodiment, the NIC 500 of Fig. 11 or the NIC 700 of Fig. 12 may be used in one or more of the Endpoint Network Devices as well.
[00066] Returning to Fig. 13, the port 440 along with the switch logic 454 provide a switch function, such that data path routes can connect from one input interconnect (e.g., fiber) of cable 446, e.g., fiber 446A, through transceiver 442, where the optical signal is converted from an optical signal to an electrical data signal, and is then sent to the switch logic 454 via path 458A. The switch logic 454 routes the signal directly back to the transceiver 442 via path 458B, where the transceiver converts the electrical data signal to an optical signal for transmission along a fiber in cable 446, e.g., fiber 446B, to connector 444. The path described above, where an incoming signal at connector 444 passes from fiber 446A through transceiver 442, and is routed by the switch logic back to the transceiver and to the connector 444 via fiber 446B, does not terminate in protocol logic 452 or pass through the protocol logic to CPU 410; instead the switch logic 454, under the control of the CPU 410, routes an incoming path directly to an outgoing path in the port 440. In this example, a signal from Endpoint Network Device 2 connects via fiber 464A into connector 462, which is coupled to connector 444. The fiber path 446A receives the optical signal and connects the signal to a port on multiport transceiver 442, which converts the received optical signal to an electrical data signal. The electrical data signal is fed via path 458A to the switch logic 454, which, under control of CPU 410, has configured connection 454A to path 458B as an outgoing path to transceiver 442. Transceiver 442 converts the data signal into an optical signal, which is then transmitted over fiber 446B to outgoing fiber 464B, which is connected to Endpoint Network Device 3. For a bidirectional communication path, a parallel path can be similarly set up from the Endpoint Network Devices, e.g., from Endpoint Network Device 3 to Endpoint Network Device 2. In this embodiment, the network interfaces 402 in the Endpoint Network Devices can switch a signal on an input path to one or more of multiple output paths directly, thus eliminating the need for an Aggregation Layer network switch in at least some applications in the data center architecture. In another embodiment, the network interfaces 402 in the Endpoint Network Devices can switch a signal on an input path 464 to multiple output paths 464 for multicast or broadcast applications.
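A compact Python sketch of the pass-through cross-connect just described follows. CrossConnect and its methods, and the reverse-direction fiber names, are hypothetical and used only to illustrate the ingress-to-egress fiber mapping performed by the switch logic.

    class CrossConnect:
        """CPU-programmed map of ingress fibers to egress fibers for pass-through switching."""
        def __init__(self):
            self.map = {}  # ingress fiber -> list of egress fibers

        def connect(self, ingress, egress):
            self.map.setdefault(ingress, []).append(egress)

        def route(self, ingress, bit_stream):
            # The serial bit stream is forwarded untouched; it never terminates in the
            # protocol logic or reaches the CPU of the endpoint network device.
            return [(egress, bit_stream) for egress in self.map.get(ingress, [])]

    xc = CrossConnect()
    xc.connect("446A", "446B")  # Endpoint Network Device 2 toward Endpoint Network Device 3
    xc.connect("446C", "446D")  # assumed fiber names for the reverse direction
    print(xc.route("446A", b"any serial bit stream"))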
[00067] Fig. 14 shows an exemplary data center network architecture according to the present disclosure. In this embodiment, Aggregation Layer switches are replaced by direct endpoint network device to endpoint network device connections, for example, direct server to server connections, direct server to storage connections, direct storage to storage connections, direct server to Distribution Layer connections, and direct storage to Distribution Layer connections. In this exemplary embodiment, data center endpoint network devices, e.g., data center network servers, are deployed in cluster group 1 and cluster group 2. The data center network servers can be the data center network server 400 shown in Fig. 9 and described above. Further, the network interface deployed in the data center endpoint network device can be the exemplary network interfaces shown in Figs. 10, 11 or 12. For ease of description, this embodiment will be described as using the network interface 402 of Fig. 10. The data center network servers 400, with switch 454 and a port 440 having a multiport transceiver 442, have multiple external links which can directly connect to one or more data center endpoint network devices, such as switches, other servers, or storage devices. Similarly, data center network storage 480, with switch 454 and a port 440 having a multiport transceiver 442, has multiple external links which can directly connect to one or more data center endpoint network devices, such as switches, servers, or other storage devices. In Fig. 14, data center network server 400A has direct connections not only to the distribution layer data network switches 16A and 16B via Path A and Path B, but also to data center network servers 400B, 400C, and 400D via Path C, Path D, and Path E. The collection of data center network servers 400A, 400B, 400C, and 400D as shown in this embodiment is referred to as a cluster. In addition to simplifying the network interconnects and eliminating switching equipment, data center network server 400A can pass data traffic directly to data center network servers 400B, 400C, and 400D without incurring the switch delay within an Aggregation Layer network switch 14.
[00068] In this embodiment, each endpoint network device, e.g., server 400, has a path to an alternate distribution switch for redundancy. Thus, in the event of a Path A failure, server 400A can interconnect to the rest of the network by connecting to Path C, Path D or Path E to data center network servers 400B, 400C, or 400D, which in turn can switch the data traffic up to the distribution layer data center network switch 16A or 16B.
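As an informal illustration of the redundancy just described, the short Python sketch below picks an alternate cluster path when the primary uplink fails. The path names mirror Fig. 14, while choose_uplink is an assumed helper, not part of the disclosure.

    def choose_uplink(paths):
        """Return the first usable path from an ordered preference list."""
        for name, status in paths:
            if status == "up":
                return name
        return None

    # Server 400A: direct Path A to the distribution layer is down, so traffic is
    # relayed through a neighboring cluster server (Path C, D or E), which in turn
    # switches it up to distribution layer switch 16A or 16B.
    paths = [("Path A", "down"), ("Path C", "up"), ("Path D", "up"), ("Path E", "up")]
    print(choose_uplink(paths))  # -> "Path C"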
[00069] The cluster group size, which is the number of data center network servers or storage devices that can be interconnected to each other or to distribution layer switches, increases as the number of ports supported by the network interface switch logic increases.
[00070] Referring to Fig. 15, another embodiment of a data center network architecture is shown. In this embodiment, the distribution layer has high density path switches 492. The high density path switches 492 support a larger number of paths that can be switched when compared to a standard distribution layer switch 16. Examples of suitable high density path switches are described in Appendix A, which is attached to the present application and is part of the present disclosure.
[00071] Since a larger number of ports can be supported on the data center high density path switches 492, a number of different network topology configurations may be implemented. In one embodiment, the number of redundant paths in each cluster can be increased. In another embodiment, additional paths can be connected from the data center endpoint network devices, e.g., servers 400, to the data center high density path switches 492 to be implemented as multiple paths in parallel for additional bandwidth. In another embodiment, the number of data center endpoint network devices, e.g., network servers 400 and network storage devices 480, capable of being supported within a larger cluster group can increase. In still another embodiment, combining the same path interconnections as shown in the cluster groups in Fig. 14 with the data center high density path switches 492, the number of cluster groups interconnected by a single data center high density path switch 492 increases, thus reducing the total number of distribution layer switches for the same number of data center endpoint network devices, e.g., servers 400 and storage devices 480. In another configuration, the increase of the path interconnections at the data center endpoint network device level, e.g., the server and storage device level, can reduce or eliminate the reliance on the core switching layer, as shown in Fig. 16. In Fig. 16, the data center high density path switches 492 include a network interface, such as network interface 402 seen in Fig. 10, NIC 500 seen in Fig. 11, or NIC 700 of Fig. 12. With the switching function built into the data center high density path switches, the switches 492 can be interconnected to provide the interconnections needed to connect to all endpoint network devices in the data center network configuration.
[00072] It will be understood that various modifications can be made to the embodiments of the present disclosure without departing from the spirit and scope thereof. Therefore, the above description should not be construed as limiting the disclosure, but merely as embodiments thereof. Those skilled in the art will envision other modifications within the scope and spirit of the invention as defined by the claims appended hereto. For example, the network interfaces contemplated by the present disclosure can utilize various communication protocols for network communications. Further, the network interfaces may use various embodiments of transceivers and connectors for communication paths. As another example, the data center network architectures contemplated by the present disclosure can include single layer and multi-layer switching layers.
IN THE UNITED STATES PATENT AND TRADEMARK OFFICE
Provisional Patent Application for
SYSTEM FOR INCREASING FIBER PORT DENSITY IN DATA CENTER APPLICATIONS
INVENTOR(S): Mohammad H. Raza, Cheshire, CT (US); David G. Stone, Irvine, CA (US); Aristito Lorenzo, Plantsville, CT (US); Ronald M. Plante, Prospect, CT (US); John R. Lagana, West Nyack, NY (US)
BACKGROUND
[0001] Field
[0002] The present application relates generally to network equipment typically used in data centers, and more particularly to network devices with increased port density and efficiency.
[0003] Description of the Related Art
[0004] Traditionally, data center network devices, such as servers, storage devices, switches, and routers, as well as NIC cards that may be added to such devices have physical connection points to transmit and receive data. These connection points generally include a transceiver and a connector, which are often referred to as a port. Ports can be copper or fiber ports that are built into the device, or the ports can be plug-in modules that contain the transceiver and connector and that plug into Small Form Factor (SFF) cages intended to accept the plug-in transceiver/connector module, such as SFP, SFP+, QSFP, CFP, CXP, and other transceiver/connector modules, where the connector extends from an exterior surface of the device, e.g., from a front panel. Fiber ports may be low density or single fiber ports, such as FC, SC, ST, LC, or the fiber ports may be higher density MPO, MXC, or other high density fiber ports.
[0005] Fiber optic cabling with the low density FC, SC, ST, or LC connectors or with SFP, SFP+, QSFP, CFP, CXP or other modules either connects directly to the data center network devices, or passes through interconnect or cross connect patch panels before reaching the data center network devices. The cross connect patch panels have equivalent low density FC, SC, ST, or LC connectors, and may aggregate individual fiber strands into high density MPO, MXC or other connectors that are primarily intended to reduce the quantity of smaller cables run to alternate panels or locations.
[0006] Fig. 1 shows a prior data center network device 10, in this case a network switch, with ports 110, each having a transceiver 111 and connector 112, mounted internally to the device 10, such that the connector extends out of a front or rear panel of the device. CPU 102 configures switch logic 104 to direct internal data streams (not shown) out via paths 108 through transceiver 111 and connector 112 in port 110. Ports 110 may be copper or fiber ports. Typically, a copper cable (cable 114A) is terminated with an RJ-45 connector (connector 116A), while a fiber cable (cable 114B) is terminated with an FC, SC, ST, or LC connector (connector 116B).
[0007] Fig. 2 shows a prior data center network device 20 where SFF cages 118 and 124 are mounted within the device 20, typically to a front or rear panel, and external transceiver/connector modules can be inserted into SFF cages 118 or 124. CPU 102 configures switch logic 104 to direct internal data streams (not shown) out via paths 108 through transceiver 121 and connector 122, or through transceiver 126 and connector 128. In this configuration, connectors 122 can consist of either single copper RJ-45 connectors, or single or duplex fiber connectors. Duplex fibers in this case are for bidirectional path communications. Connectors 128 can consist of multi-fiber connectors, such as MPO multifiber connectors.
[0008] Using SFP or SFP+ transceiver modules permits a single connection to be configured between two data center network devices at data rates of up to 10Gbps. Using QSFP, CFP, CXP, or other transceivers permits a single connection to be configured between two data center network devices at data rates of up to and beyond 100Gbps.
[0009] MPO multifiber connectors are used for IEEE 802.3ba industry standard 40Gbps and 100Gbps bandwidth fiber connections. Fig. 3 shows IEEE 802.3ba 40GBASE-SR4 optical lane assignments where 40Gbps bandwidth is achieved by running four fibers of 10Gbps in one direction (Tx) for the 40Gbps transmit path, and four fibers of 10Gbps in the other direction (Rx) for the 40Gbps receive path. This means four fibers in the 12 fiber MPO are unused, thus decreasing connector and cable efficiency.
[00010] 100Gbps bandwidth fiber connections are achieved by running 10 fibers of 10Gbps in one direction (Tx) for the 100Gbps transmit path, and 10 fibers of 10Gbps in the other direction (Rx) for the 100Gbps receive path. Fig. 4A shows two IEEE 802.3ba 100GBASE-SR10 optical lane assignments for 12 fiber MPOs, where one MPO uses 10 fibers of 10Gbps for the 100Gbps transmit path (Tx), leaving 2 fibers unused, and the other MPO uses 10 fibers of 10Gbps for the 100Gbps receive path (Rx), leaving 2 fibers unused, again decreasing connector and cable efficiency. Fig. 4B shows a 24 fiber MPO, where 10 fibers of 10Gbps are used for the 100Gbps transmit path (Tx), plus 10 fibers of 10Gbps are used for the 100Gbps receive path (Rx), leaving a total of 4 unused fibers, again decreasing connector and cable efficiency.
[00011] There also exists a standard for 100Gbps transmission which uses four 25Gbps fiber data rate connections configured similar to the 40Gbps standard, where eight fibers (four transmit and four receive fibers) are used in a 12 fiber MPO. Implementing this standard means that four fibers in a 12 fiber MPO are not used, again decreasing connector and cable efficiency.
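The unused-fiber counts discussed in the preceding paragraphs can be tabulated with the short sketch below; mpo_utilization is an assumed helper used only to restate the arithmetic.

    def mpo_utilization(tx_fibers, rx_fibers, connector_fibers):
        """Return (fibers used, fibers left unused) for one connector."""
        used = tx_fibers + rx_fibers
        return used, connector_fibers - used

    print(mpo_utilization(4, 4, 12))    # 40GBASE-SR4 in a 12 fiber MPO        -> (8, 4)
    print(mpo_utilization(10, 10, 24))  # 100GBASE-SR10 in a 24 fiber MPO      -> (20, 4)
    print(mpo_utilization(10, 0, 12))   # one of two 12 fiber MPOs for SR10    -> (10, 2)
    print(mpo_utilization(4, 4, 12))    # 4 x 25Gbps variant in a 12 fiber MPO -> (8, 4)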
[00012] In each of these cases, the industry standard method of migrating from a 10Gbps connection to a 40Gbps or 100Gbps connection, or from a 40Gbps connection to a 100Gbps connection, requires reconfiguring the fiber transmit and receive paths by physically changing the ports within the data center network devices, increasing the cost to run the data center. Adding further to the cost to run the data center is that this change has to occur at both ends of the path (i.e., the receive port and the transmit port) as well as in the cabling therebetween.
[00013] In many cases, the entire data center network device has to be upgraded as the transceiver/connector configuration of Fig. 1, or the transceiver/connector/SFF cage configuration of Fig. 2, cannot support the higher data rate speeds on the additional fiber ports associated with 40Gbps or 100Gbps ports. Further, in each of the configurations described above, fibers are left unused in the connectors and cables, thus wasting resources and unnecessarily increasing costs for the higher fiber cabling and connectors. To illustrate, connector 132 (seen in Fig. 2) is a 12 fiber MPO connector and fiber cable 130 is a 12 fiber cable. To use this cable and connector in a 40Gbps or 100Gbps application would leave 2 or 4 fibers unused, depending upon the type of port used.
[00014] Further, in current network devices the ports 110 (i.e., the transceiver 111 and connector 112 in Fig. 1, or the transceiver 121, connector 122 and SFF cage 118 in Fig. 2) are connected directly to front or rear panels of the network device. The physical size of the transceiver or SFF module significantly limits the number of connectors 112 or cages 118 that can be installed on the front or rear panels of the network device, thus limiting the ability to cost effectively increase port density.
SUMMARY
[00015] The present application relates generally to data center network device architectures that implement high density ports, low density ports and combinations of high density and low density ports, for effective use of data center network device panel space thus increasing port density without the need to replace network devices, connectors and/or transceivers. Data center network devices contemplated by the present application include servers, storage devices, NIC cards, switches, and routers.
[00016] By separating the transceivers from the panel connectors as disclosed herein, the present application introduces new methods for increasing the density of the optical interface circuitry within data center network devices to achieve higher density on the device front panel. Additionally, by using combinations of ports, dynamic mixing of speeds of fiber connections within high density fiber connectors on a per fiber basis can be achieved.
[00017] Port configurations disclosed in the present application also provide discovery of end-to-end connectivity through the use of managed connectivity cable methods such as 9th wire, CPID, and other methods. Knowledge of the end-to-end physical configurations in one or more paths, including the discovery of per port path connectivity, permits data center management on a per port and per cable connector basis, including the ability to identify changes in state of a physical connection in real time.
[00018] An exemplary embodiment of a data center network device according to the present application includes a housing having one or more connection panels, and a set of ports. Each port within the set of ports is configured to receive data streams from an external medium and to transmit data streams to an external medium, and includes a connector and at least one transceiver optically coupled to the connector. The connector is mounted to the connection panel, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector. The at least one transceiver may be mounted to a circuit board within the housing or plugged into a cage, e.g., an SFF cage, mounted within the housing. The connector is optically coupled to the at least one transceiver using fiber cables and/or optical waveguides.
[00019] The transceivers employed in the present application may be low density transceivers, high density transceivers, or combinations of low density transceivers and high density transceivers. Examples of transceivers that may be used in the present application include, SFP, SFP+, QSFP, CFP, CXP, and WDM transceivers, and if the transceiver is pluggable in a cage, the cage would be a compatible cage for the transceiver used.
BRIEF DESCRIPTION OF THE DRAWINGS
[00020] Fig. 1 is a block diagram of a prior data center network device architecture with internal ports;
[00021] Fig. 2 is a block diagram of a prior data center network device architecture with external insertable ports;
[00022] Fig. 3 shows IEEE 802.3ba 40GBASE-SR4 optical lane assignments;
[00023] Fig. 4 shows IEEE 802.3ba 100GBASE-SR10 optical lane assignments;
[00024] Fig. 5 is a block diagram of an exemplary embodiment of a data center network device according to the present application with internally mounted insertable ports;
[00025] Figs. 5A-5C are block diagrams of exemplary embodiments of the different internally mounted insertable ports used in the data center network device of Fig. 5; [00026] Fig. 6 is a block diagram of another exemplary embodiment of a data center network device according to the present application with internal high density ports;
[00027] Figs. 6A-6G are block diagrams of exemplary embodiments of the different internally mounted insertable ports used in the data center network device of Fig. 6;
[00028] Fig. 7 is a block diagram of an exemplary embodiment of a data center NIC according to the present application with internal high density ports; and
[00029] Fig. 8 is a block diagram of an exemplary embodiment of a data center network device according to the present application with internal high density ports and intelligent managed connectivity capabilities.
DETAILED DESCRIPTION
[00030] In this disclosure, references to input and output, transmit and receive are used as references to simplify explanations. In actual practice, inputs may be outputs, they may switch direction from the output side to the input side, or they may be bidirectional signals. This is similar for the terms transmit and receive.
[00031] Referring to Fig 5, an exemplary high density data center network device 30 is shown. In this embodiment, the data center network device 30 is a network switch. However, the device 30 may be a server, storage device, NIC card, router or other data center network device.
[00032] In the embodiment of Fig. 5, the data center network device 30 includes a housing 32 for installation in a rack within the data center. The housing includes a front panel 34 and a rear panel 36 that can be used as a connection point for external connection to other data center network devices. To connect the data center network device 30 with other data center network devices, a set of ports is used for transmitting and receiving of data streams between the data center network device 30 and other external data center network devices. As noted, the data center network device in the embodiment of Fig. 5 is a switch, which includes switch logic 538 connected to each port via interconnections 540, and a CPU 542 connected, via interconnection 544, to the switch logic 538. The CPU 542 is configured to control the switch logic 538, and thus the flow of data streams from one port to the same or another port within the switch. [00033] The ports that may be used in the set of ports contemplated by the present application may vary. For the purpose of this application, a port includes any of the port types described herein; these port types are provided as exemplary embodiments and are not intended to limit the ports contemplated herein.
Referring to Figs. 5A-C, three different port types 500, 510 and 520 are employed to further increase the panel density in the embodiment of Fig. 5. The first port type 500, shown in Fig. 5A, is a low density port having a low density panel connector 502 and a compatible low density cable 504 connected between the connector 502 and a compatible low density transceiver in SFF 506 mounted within the housing 32. The low density panel connector 502 is preferably an FC, SC, ST, LC, or other type of single or duplex fiber connector, and the compatible low density transceiver in SFF 506 is an SFP, SFP+, or other type of single or duplex fiber transceiver plugged into an SFF cage configured to receive the pluggable transceiver. External connections to the low density ports 500 are with single fiber or duplex fiber cables 552 using FC, SC, ST, LC, or other types of single or duplex fiber connectors 550.
[00034] The second port type employed in the embodiment of Fig. 5 is a high density port 510, shown in Fig. 5B, having panel connector 512 and a compatible high density cable 514 connected between the connector 512 and a compatible high density transceiver in SFF 516 mounted within the housing 32. The high density panel connector 512 is preferably an MPO, MXC or other high density multi-fiber panel connector used for industry standard 40Gbps and 100Gbps applications, and the compatible high density transceiver in SFF 516 is a QSFP, CFP, CXP type, or other high density pluggable transceiver used for industry standard 40Gbps and 100Gbps applications, plugged into an SFF cage configured to receive the pluggable transceiver. This configuration supports industry standard 40Gbps and 100Gbps using 10Gbps data rates per fiber, or 100Gbps using 25Gbps data rates per fiber. To support the industry standard application of 40Gbps or 100Gbps, panel connector 512 is configured according to industry standard fiber configurations. External connections to the high density ports 510 are with multi-fiber cables 556 using MPO, MXC or other high density multi-fiber connectors 554. [00035] The third port type employed in the embodiment of Fig. 5 is a high density port 520, shown in Fig. 5C, having panel connector 522 and multiple compatible high density cables 524 connected between the connector 522 and multiple compatible high density transceivers in SFF 526 mounted within the housing 32. The high density panel connector 522 is a multi-fiber MPO or MXC type panel connector coupled to three compatible high density transceivers in SFF 526, such as SFP, SFP+, QSFP, CFP, CXP type, or other high density transceivers plugged into an SFF cage configured to receive the pluggable transceiver. The third port configuration permits multiple simplex or duplex fiber communications paths from one or more transceivers in SFF 526 to a single MPO or MXC connector 522, independent of each other. External connections to the high density ports 520 are with multi-fiber cables 558 using MPO, MXC or other high density multi-fiber connectors 560.
[00036] The pluggable transceivers used in each port may be low density or high density transceivers or a combination of low density and high density transceivers. A transceiver has a receiver which receives a data stream from an external medium connected to the data center network device 30, and a transmitter which transmits a data stream to the external medium connected to the data center network device. Examples of low density transceivers include SFP and SFP+ type transceivers, and examples of high density transceivers include QSFP, CFP, CXP type, or other high density transceivers.
Transceiver chips, such as the FTLX8571D3BCV, manufactured by Finisar Corp. may be employed as the low density transceiver, and transceiver chips, such as the
FTLQ8181EBLM, also manufactured by Finisar Corp. may be employed as the high density transceiver.
[00037] It should be noted that the present application is not limited to connectors, transceivers and/or SFF cage configurations capable of supporting data rates of up to 100Gbps. The embodiments of the present application can also support data rates greater than 100Gbps.
[00038] In the embodiment of Fig. 5, the transceivers in SFF 506, 516 and 526 are configured in the housing in a staggered arrangement away from the front panel 34 (or rear panel 36) such that each transceiver in SFF is not connected directly to the front panel 34. Only the connectors 502, 512 and 522 are connected to the front panel 34 (or rear panel 36) of the housing 32. This configuration allows more connectors to be connected to the front panel of the device 30, thus increasing the panel density of the device.
[00039] The data center network device 30 of the present application permits multiple 10Gbps, 40Gbps, and 100Gbps connections in the same high density connectors 522. Currently, high density MPO connectors can support up to 72 fibers, while high density MXC connectors can support up to 64 fibers. As such, the fiber cable group 560, for example, can fan out to as many ports as needed to support the desired fibers for the high density connector 558. The fibers in cable 560 may all terminate into a single data center network device at a remote end of the cable 560, or may be split up via interconnect panels, cross connect panels, hydra cables or other devices capable of splitting the fiber cables, such that the fiber ends are physically routed to different data center network devices. By employing a combination of low and high density ports in the embodiment of Fig. 5, and the staggered transceiver module arrangement, the fiber count is significantly increased, thus further increasing the panel density.
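As a back-of-the-envelope illustration of the fan-out described above, the short sketch below counts how many ports a single high density connector can feed; fan_out is an assumed helper name.

    def fan_out(connector_fibers, fibers_per_port):
        """How many ports of a given fiber width one high density connector can feed."""
        return connector_fibers // fibers_per_port

    print(fan_out(72, 12))  # 72 fiber MPO -> six 12 fiber transceiver groups
    print(fan_out(72, 2))   # 72 fiber MPO -> thirty-six duplex paths
    print(fan_out(64, 2))   # 64 fiber MXC -> thirty-two duplex paths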
[00040] Referring now to Fig. 6, another embodiment of a data center network device according to the present application is disclosed. In this embodiment, the data center network device 60 is a network switch. However, the device 60 may be a server, storage device, NIC card, router or other data center network device. The data center network device 60 includes a housing 32, for installation in a rack within the data center. The housing 32 includes a front panel 34 and a rear panel 36 that can be used as a connection point for external connection to other data center network devices. To connect the data center network device 60 with other data center network devices, a set of ports is used for transmitting and receiving of data streams between the data center network device 60 and other external data center network devices. As noted, the data center network device in the embodiment of Fig. 6 is a switch, which includes switch logic 692 connected to each port via interconnect 690, and a CPU 696 connected, via interconnect 694, to the switch logic 692. The CPU 696 is configured to control the switch logic 692, and thus the flow of data streams from one port to the same or another port within the switch. [00041] The ports that may be used in the set of ports contemplated by the present application may vary. For the purpose of this application, a port includes any of the port types described herein, but this disclosure is not intended to limit the ports contemplated herein and are provided as exemplary embodiments for the ports that may be used. Fig. 6 shows several embodiments of transceiver and port connections with additional details of these embodiments shown in Figs. 6A-6D and 6G, and with additional embodiments shown within Figs. 6E and 6F. These transceivers are collectively referred herein as transceivers 698 for ease of reference.
[00042] Individual 10Gbps ports can be dynamically bonded together to create 40Gbps ports and/or 100Gbps ports to form multifiber connections between data center network devices. This capability enables data centers to dynamically scale from using data center network devices that operate using 10Gbps ports to data center network devices that operate using 40Gbps ports, 100Gbps ports, or ports with data rates greater than 100Gbps. Further, the ports of the present application permit the use of all fibers in the IEEE802.3ba 40GBASE-SR4 optical lane assignments or IEEE802.3ba 100GBASE-SR10 optical lane assignments within the connector and allow data center network devices, e.g., interconnect panels and switches, to separate individual links from bonded links. This also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to 24, 48, 72, or greater high density fiber combinations in order to support multi-rate and multi-fiber applications in the same connector. This capability also permits the expansion of high density fiber configurations, e.g., 12 fiber MPO configurations, to MXC or other high fiber count configurations without the need for predefined bonding for multi-fiber applications in the same connector.
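As a minimal sketch of the dynamic bonding described above (the data structure and function below are hypothetical and assume a CPU-maintained bonding table, which the disclosure does not prescribe), individual 10Gbps lanes could be grouped under one logical port as follows:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_LANES_PER_GROUP 10   /* up to ten lanes, e.g. for 100GBASE-SR10 */

/* Hypothetical descriptor for one bonded group of 10Gbps transceiver lanes. */
struct bond_group {
    uint8_t  lane_ids[MAX_LANES_PER_GROUP];  /* physical lanes in the group          */
    uint8_t  lane_count;                     /* 1 = single 10G, 4 = 40G, 10 = 100G   */
    uint32_t aggregate_rate_gbps;            /* rate advertised for the logical port */
    bool     active;
};

/* Bond four 10Gbps lanes into one logical 40Gbps port (sketch only). */
static void bond_40g(struct bond_group *g, const uint8_t lanes[4])
{
    for (int i = 0; i < 4; i++)
        g->lane_ids[i] = lanes[i];
    g->lane_count = 4;
    g->aggregate_rate_gbps = 40;
    g->active = true;
}
```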
[00043] Additionally, by utilizing data center network devices according to the present application, such as interconnect panels and switches, to bond and un-bond fiber pairs, the data center network device can create bonded pairs that traverse multiple connectors. In most cases for this type of application, the two or more separate paths can be configured such that the connection medium is the same, and the overall length of each path is substantially the same to minimize differential delays. [00044] A further capability of the data center network device of the embodiment of Fig. 6 is the capability to permit multiple 10Gbps, 40Gbps, and 100Gbps connections in the same high density connectors. By incorporating transceivers 698, which in this embodiment are multiport transceivers, connected via interconnect 690 to common switch logic 692, CPU 696 can program switch logic 692 to dynamically map the individual ports to a fiber cable such that all the fibers can be used within the connector to provide multi-rate communications capabilities within the same connector for different connection paths.
[00045] In one embodiment, switch logic 692 can be configured to provide a fixed data reception and transmission rate from one transceiver port to another transceiver port. In another embodiment, the switch logic 692 can be programmed by CPU 696 to receive data at one rate on one receiver port and transmit it at a different rate on a different transmit port. The transceivers 698 and switch logic 692 provide the data rate retiming and data buffering necessary to support different rate transmit and receive connections.
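A minimal sketch of the kind of cross-connect entry the CPU might program into the switch logic for the rate-matching behavior described above is shown below; the structure and field names are assumptions introduced for illustration only:

```c
#include <stdint.h>

/* Hypothetical cross-connect entry programmed by the CPU into the switch logic.
 * When rx_rate_gbps differs from tx_rate_gbps, the transceivers and switch logic
 * are assumed to supply the retiming and buffering between the two ports.        */
struct rate_map_entry {
    uint16_t rx_port;        /* receiving transceiver port        */
    uint16_t tx_port;        /* transmitting transceiver port     */
    uint16_t rx_rate_gbps;   /* e.g. 10, 40, or 100               */
    uint16_t tx_rate_gbps;   /* may differ from the receive rate  */
};
```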
[00046] Referring to Figs. 6A-6G, multiple different port types, some of which are shown in Fig. 6, are employed to further increase the panel density. These may be implemented as a single embodiment for a particular data center network device, or more than one embodiment may be implemented in a data center network device. The first port 600, shown in Fig. 6A, includes a multi-port transceiver 602 and single or duplex fiber panel adapters 604, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters. The transceiver 602 is connected to the panel adapter 604 via
interconnect 606. Interconnect 606 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between the transceiver 602 and the front panel 34 (or rear panel 36) mounted fiber connector 604. This configuration is an example of a multi-port transceiver 602 configured as individual fiber connections independent of each other. One advantage of this configuration is that the port density can be much greater since the individual multi-port transceiver 602 occupies less printed circuit board real estate than multiple single port transceivers.
[00047] The second port 610, shown in Fig. 6B, includes a multi-port transceiver 612 and a high density panel connector 614, such as an MPO, MXC, or other high density connector. The transceiver 612 connects to the multi-fiber high density connector 614 via fiber interconnect 616. The interconnect 616 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 612 and the front panel 34 (or rear panel 36) mounted fiber connector 614. This configuration is an example of combining multiple independent simplex or duplex optical ports from a transceiver for connection to a single multi-fiber cable 682. This permits aggregation of multiple independent fiber links for delivery to a single endpoint or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations or nodes.
[00048] The third port 620, shown in Fig. 6C, includes a transceiver 622 and a high density multi-fiber connector 624, such as an MPO, or other high density fiber connector used for industry standard 40Gbps and 100Gbps applications. The transceiver 622 is connected to connector 624 via a compatible multi-fiber interconnect 626. The interconnect 626 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 622 and the front panel 34 (or rear panel 36) mounted fiber connector 624. This configuration supports industry 40Gbps and 100Gbps connections using 10Gbps data rates per fiber, or 100Gbps connections using 25Gbps data rates per fiber. In this port embodiment, the transceivers bond individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to the connector 624. In this way, the transceiver can provide 40Gbps, 100Gbps or greater transmission rates. To support the industry standard application of IEEE 802.3ba 40GBASE-SR4 or IEEE 802.3ba 100GBASE-SR10, panel connector 624 can be configured according to the industry standard fiber configurations. With this implementation, 8 fibers would be used for data transmission for 40GBASE-SR4 applications, or 10 fibers would be used for 100GBASE-SR10, with the remaining fibers in the MPO connector not configured to pass data.
[00049] The fourth port 630, shown in Fig. 6D, includes a multi-port transceiver 632 and panel connectors 634, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters, MPO, MXC, or other high density connectors, or any combination of these connectors. The transceiver 632 connects to the panel connectors 634 via fiber interconnect 636. The interconnect 636 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 632 and the front panel 34 (or rear panel 36) mounted fiber connectors 634. This configuration is an example of combining multiple independent simplex or duplex optical fibers from a multi-port transceiver for connection to single fiber cables or to multi-fiber cables 678 (seen in Fig. 6). This permits aggregation of multiple independent fiber links into multiple connector types for delivery to a single or different endpoints or to be separated within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations.
[00050] The fifth port 640, shown in Fig. 6E, includes a multi-port transceiver (i.e., a transceiver with multiple connection ports) 642 and panel connectors 644, consisting of an MPO connector as well as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters. The transceiver 642 connects to the panel connectors 644 via fiber interconnect 646. The interconnect 646 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 642 and the front panel (or rear panel) mounted fiber connectors 644. This configuration is an example of combining industry standard 40Gbps and 100Gbps connections using 10Gbps data rates per fiber and independent 10Gbps fiber connections in the same transceiver 642. In this port embodiment, the transceivers can bond four or ten individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to connector 644. In this way, the transceiver can provide 40Gbps or 100Gbps transmission rates, or transmission rates greater than 100Gbps. To support the industry standard application of IEEE 802.3ba 40GBASE-SR4 or IEEE 802.3ba 100GBASE-SR10, panel connectors 644 can be configured with an MPO according to the industry standard fiber configurations plus additional connectors, such as FC, SC, ST, LC, or other type of single or duplex fiber panel adapters, or an additional high density connector such as an MPO, MXC or other type to transport the remaining independent fiber links from transceiver 642. With this implementation, 8 fibers would be used for data transmission for 40GBASE-SR4 applications, or 10 fibers would be used for 100GBASE-SR10, with the remaining fibers in the MPO connector not configured to pass data. [00051] The sixth port 650, shown in Fig. 6F, includes a transceiver 652 and a high density multi-fiber connector 654, such as an MPO, or other high density fiber connector. The transceiver 652 connects to the panel connector 654 via fiber interconnect 656. The interconnect 656 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 652 and the front panel 34 (or rear panel 36) mounted fiber connector 654. This configuration is an example of combining industry standard 40Gbps and 100Gbps connections using 10Gbps data rates per fiber and independent 10Gbps fiber connections in the same transceiver 652 and in the same panel connector 654. In this port embodiment, the transceivers can bond four or ten individual transceiver ports together as low skew transmission and receive groups of channels to form multi-fiber connections to a data center network device connected to the far end of the cable that is connected to connector 654. In this way, the transceiver can provide 40Gbps or 100Gbps transmission rates, or transmission rates greater than 100Gbps. With this implementation, the connector 654 can carry all the fiber connections from transceiver 652. This permits aggregation of 40GBASE-SR4 or 100GBASE-SR10 applications along with independent fiber links for delivery to a single endpoint, or separation within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations.
[00052] The seventh port 660, shown in Fig. 6G, includes multiple transceiver modules 662 and a high density panel connector 664, such as an MPO, MXC, or other high density connector. The transceiver modules 662 connect to the multi-fiber high density connector 664 via fiber interconnect 666. The interconnect 666 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceivers 662 and the front panel 34 (or rear panel 36) mounted fiber connector 664. This configuration is an example of combining multiple ports from one or more transceivers for connection to the fibers of a single multi-fiber interconnect 666, and permits multiple simplex or duplex fiber, 40GBASE-SR4, 100GBASE-SR10, or other communications paths from one or more transceivers to a single high density connector 664, independent of each other. This permits aggregation of multiple 40GBASE-SR4 or 100GBASE-SR10 applications along with independent fiber links for delivery to a single endpoint, or separation within patch panels, hydra cables, or other mechanisms to be distributed to different end destinations. Currently, high density MPO connectors can support up to 72 fibers and high density MXC connectors can support up to 64 fibers. As a result, fiber cable group 686 (seen in Fig. 6) can fan out to as many transceivers as needed to support the desired fibers for the connector 664.
[00053] In the embodiment of Fig. 6, each transceiver is preferably a multiport transceiver that is built into the data center network device 60, unlike the embodiment of Fig. 5 where the transceiver is plugged into an SFF cage. Each transceiver is preferably dedicated to a particular industry standard application, such as a 40GBASE-SR4 or 100GBASE-SR10 application, or can provide individual ports that are configurable and either independent of one another or capable of being grouped together into a bonded high speed collection of fiber paths. Each transceiver may physically consist of a single multiport transceiver, or may be a multiport transmitter component paired with a multiport receiver component. Examples of suitable multiport transceivers include the FBOTD10SL1C00 12-Lane Board-mount Optical Assembly manufactured by Finisar Corp. Examples of multiport transmitter components and paired multiport receiver components include the AFBR-77D1SZ Twelve-Channel Transmitter and AFBR-78D1SZ Twelve-Channel Receiver
manufactured by Avago Technologies. The transceivers may be configured in the housing 32 in a staggered arrangement away from the front panel 34 (or rear panel 36) such that the transceivers are not connected directly to the front panel 34 (or rear panel 36). This configuration allows more connectors to be connected to the front panel (or rear panel) of the device 60, thus increasing the panel density of the device. By utilizing multiport transceivers and building them into the data center network device in a staggered arrangement as described, the panel density of the data center network device is further increased over the increased panel density provided by the embodiment of Fig. 5. In another embodiment, single transmission connections, such as 1Gbps, 25Gbps, 56Gbps, or other transmission rates, may be intermixed in the same high density connector, e.g., an MPO, MXC or other high fiber count connector, with Wavelength Division Multiplexor (WDM) fiber transmission schemes, such as Coarse Wavelength Division Multiplexor (CWDM), Dense Wavelength Division Multiplexor (DWDM), or other WDM capabilities, such as silicon photonics interfaces where multiple wavelengths may be transmitted or received over a single input fiber. [00054] For clarity, a port as described herein is a component having a transceiver and connector, as described with reference to Fig. 5. For the embodiment of Fig. 6, a transceiver port relates to multiport transceivers where each transceiver port of the transceiver is independently capable of receiving a data stream from an external medium connected to the data center network device, and transmitting a data stream to the external medium connected to the data center network device.
[00055] Referring now to Fig. 7, another embodiment of a data center network device according to the present application is disclosed. In this embodiment, a Network Interface Card (NIC) 70 is shown with a port configured by high density connector 702 and multiport transceiver 704. Like the above described embodiments, the transceiver 704 may be a transceiver chip mounted to the NIC 70, or a pluggable transceiver and an SFF cage mounted to the NIC 70, or a separate transmitter and receiver mounted to the NIC 70. The NIC is a plug-in card for a data center network device which provides an interface for the data center network device to interconnect to an external medium. The NIC card contains the desired interface for a particular application, such as a copper Ethernet interface, Wi-Fi interface, serial port, Fibre Channel over Ethernet (FCoE) interface, or other media interface. In the embodiment of Fig. 7, the NIC interconnects to the data center network device via a Peripheral Component Interconnect (PCI) Interface Connection 712, as one common device interconnect standard. In this embodiment, the data center network device CPU configures and controls the NIC via PCI interface logic 714 over PCI Interface bus 716.
[00056] Preferably, each NIC card is designed for a specific application or
implementation. In this embodiment, function block 708 provides control logic to convert the PCI Interface data stream format into a data stream format for transceiver 704 and vice versa. The transceiver 704 provides the OSI Layer 1 physical layer interface for the external port 702 interface, while functional block 708 provides the OSI Layer 2 processing for the external communications. Depending upon the NIC implementation, additional OSI Layer functions may also be included within the NIC card. Transceiver 704 connects to the multi-fiber high density connector 702 via fiber interconnect 766. The interconnect 766 may be an optical fiber cable, optical waveguide, or other mechanism to couple the optical signals between transceiver 704 and the NIC edge panel mounted fiber connector 702.
[00057] The NIC can be installed within a data center network device to create a high density data center network device as described herein. In the embodiment of Fig. 7, one transceiver 704 is shown on the NIC 70, but more than one transceiver module may be added to the NIC 70, similar to the embodiments shown in Figs. 5 and 6. The ports can be configured to support individual 10Gbps data rates, 40Gbps or 100Gbps data rates, or data rates greater than 100Gbps, as described above. Similarly, the connections can be individual fiber connections, IEEE802.3ba 40GBASE-SR4 optical lane assignments, IEEE802.3ba 100GBASE-SR10 optical lane assignments, or may be dynamically configured by the data center network device CPU.
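Purely as an illustrative sketch (the register layout below is hypothetical and not taken from the disclosure), a host CPU could describe the desired port configuration to such a NIC over the PCI interface with a small configuration block:

```c
#include <stdint.h>

/* Hypothetical per-port configuration block written by the host CPU over PCI. */
struct nic_port_config {
    uint8_t  transceiver_id;   /* which multiport transceiver on the NIC                      */
    uint16_t lane_mask;        /* one bit per lane routed to the edge connector               */
    uint16_t rate_gbps;        /* 10, 40, 100, or a greater bonded rate                       */
    uint8_t  lane_assignment;  /* 0 = independent fibers, 1 = 40GBASE-SR4, 2 = 100GBASE-SR10  */
};
```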
[00058] Each fiber connector may have one or more associated Light Emitting Diodes (LEDs) used for status and control information. Each LED may be a single color or multicolor LED as determined by the product implementation. Each LED may have a blink rate and color used to identify specific states for the port. The LEDs can be illuminated by the data center network device CPU to indicate information, which may include port status for a single active port or for multiple ports on each connector. The LEDs can also be used during installation or Moves-Adds-and-Changes to indicate to data center personnel which connector port is to be serviced. The data center network device CPU may also indicate port status information on a Liquid Crystal Display (LCD) located near the panel connectors.
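As one hypothetical illustration of the LED behavior described above (the colors, blink period, and names below are assumptions, since the disclosure leaves these choices to the product implementation), a per-connector LED state could be encoded as:

```c
#include <stdint.h>

/* Hypothetical per-connector status LED encoding. */
enum led_color { LED_OFF, LED_GREEN, LED_AMBER, LED_RED };

struct port_led_state {
    enum led_color color;
    uint16_t blink_period_ms;   /* 0 = solid on */
};

/* Example only: blink the LED to guide a technician to the connector
 * that is to be serviced during a Move-Add-Change.                   */
static struct port_led_state locate_port_indication(void)
{
    struct port_led_state s = { LED_AMBER, 500 };
    return s;
}
```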
[00059] Referring to Fig. 8, another embodiment of a data center network device 90 according to the present application is disclosed. In this embodiment, the data center network device is similar to the device described above with reference to Fig. 6, as well as the Network Interface Card shown in Fig. 7, and adds the capability to interpret cable information from cables connected to the data center network device 90 by obtaining intelligent information from within the cables. In addition to interfacing to standard cables 672, 676, 678, and others not shown, adapters 920, 922, 924, 926 have the capability, via interface 906, to detect the presence of a cable connector 670, 674, 680, 684, 688, 970, 980, 984, 988, or others not shown, inserted into intelligent adapters 920, 922, 924, 926, and, in the case of intelligence equipped cable connectors 970, 980, 984, 988, and others not shown, to read specific cable information by reading the information in cable media 910. To ascertain cable information, the data center network device 90 may be designed with ninth wire technology interfaces, RFID tagging technology interfaces, connection point ID (CPID) technology interfaces, or other cable managed intelligence technology interfaces. In another embodiment, the data center network device 90 may be designed with more than one of these different technology interfaces in order to support more than one managed intelligent technology.
[00060] Each data center network device 90 equipped with intelligent cable interfaces has the capability to determine cable presence and/or to read the cable information available to the interface, depending upon the information provided by the intelligent cable.
[00061] The cable information read from media interface adapter 906 via media interface bus 904 by media reading interface 902 and provided to CPU 942 may include, for each cable connection, the cable type, cable configuration, cable length, cable part number, cable serial number, and other information available to be read by media reading interface 902. This information is collected by media reading interface 902 and passed to the CPU 942 via control bus 944. The CPU 942 can use this information to determine end-to-end information regarding the overall communication path and the intermediary connections which make up an end-to-end path.
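A minimal sketch of the record the media reading interface could assemble and pass to the CPU is shown below; the field names and sizes are hypothetical and simply mirror the categories of cable information listed above:

```c
#include <stdint.h>

/* Hypothetical cable information record passed from the media reading
 * interface to the device CPU over the control bus.                   */
struct cable_info {
    uint8_t  present;             /* connector detected in the intelligent adapter */
    uint16_t cable_type;          /* e.g. simplex, duplex, MPO, MXC                */
    uint16_t cable_configuration;
    uint32_t length_cm;
    char     part_number[32];
    char     serial_number[32];
};
```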
[00062] As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system."
[00063] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
[00064] With certain illustrated embodiments described above, it is to be appreciated that various non-limiting embodiments described herein may be used separately, combined or selectively combined for specific applications. Further, some of the various features of the above non-limiting embodiments may be used without the corresponding use of other described features. The foregoing description should therefore be considered as merely illustrative of the principles, teachings and exemplary embodiments of this invention, and not in limitation thereof.
[00065] It is also to be understood that the above-described arrangements are only illustrative of the application of the principles of the illustrated embodiments. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the scope of the illustrated embodiments, and the appended claims are intended to cover such modifications and arrangements.
What is claimed is:
1. A data center network device, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
2. The data center network device according to claim 1, wherein the at least one transceiver is mounted to a circuit board within the housing.
3. The data center network device according to claim 1, wherein the connector is optically coupled to the at least one transceiver using at least one fiber cable.
4. The data center network device according to claim 1, wherein the connector is optically coupled to the at least one transceiver using at least one optical waveguide.
5. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver.
6. The data center network device according to claim 1, wherein the at least one transceiver comprises a plurality of transceivers.
7. The data center network device according to claim 6, wherein the plurality of transceivers comprise low density transceivers.
8. The data center network device according to claim 6, wherein the plurality of transceivers comprise high density transceivers.
9. The data center network device according to claim 6, wherein the plurality of transceivers comprises a combination of low density transceivers and high density transceivers.
10. The data center network device according to claim 1, wherein the at least one transceiver comprises a pluggable transceiver and a cage for receiving the pluggable transceiver.
11. The data center network device according to claim 10, wherein the cage comprises an SFF cage.
12. The data center network device according to claim 10, wherein the pluggable transceiver comprises an SFP transceiver, and the cage comprises a corresponding SFF cage.
13. The data center network device according to claim 10, wherein the pluggable transceiver comprises an SFP+ transceiver, and the cage comprises a corresponding SFF cage.
14. The data center network device according to claim 10, wherein the pluggable transceiver comprises a QSFP transceiver, and the cage comprises a corresponding SFF cage.
15. The data center network device according to claim 10, wherein the pluggable transceiver comprises a CFP transceiver, and the cage comprises a corresponding SFF cage.
16. The data center network device according to claim 10, wherein the pluggable transceiver comprises a CXP transceiver, and the cage comprises a corresponding SFF cage.
17. The data center network device according to claim 10, wherein the pluggable transceiver comprises a WDM transceiver, and the cage comprises a corresponding SFF cage.
18. The data center network device according to claim 1, wherein the connector comprises a simplex, duplex, or high density fiber connector.
19. The data center network device according to claim 18, wherein the high density fiber connector comprises MPO connectors.
20. The data center network device according to claim 18, wherein the high density fiber connector comprises MXC connectors.
21. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the at least one multiport transceiver ports can be connected to individual fiber connections on the connector.
22. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver and the connector comprises high density fiber connector, and wherein each of the multiport transceiver ports can be connected to the high density fiber connector with individual fiber connections.
23. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver and the connector comprises IEEE 802.3ba 40GBASE-SR4 or 100GBASE-SR10 connector configurations, and wherein each of the multiport transceiver ports can be connected to the connectors.
24. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver and each of the multiport transceiver ports can be split between different connection panel connectors.
25. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver and each of the multiport transceiver ports can be combined into a single high density connection panel connector.
26. The data center network device according to claim 1, wherein the at least one transceiver comprises at least one multiport transceiver and the connector comprises a plurality of connectors, wherein at least one of the plurality of connectors comprises an IEEE 802.3ba 40GBASE-SR4 or 100GBASE-SR10 connector configuration and at least one of the plurality of connectors comprises a high density fiber connector.
27. The data center network device according to claim 1, wherein a data stream from a receiving port at one line rate may be coupled to an outgoing transmission port at a different line rate.
28. The data center network device according to claim 1, wherein one or more of the ports in the set of ports comprise managed connectivity ports capable of reading a physical location identification from a managed connectivity port from an external medium connected to the one or more ports in the set of ports.
29. A network switch, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
30. The network switch according to claim 29, wherein the at least one transceiver is mounted to a circuit board within the housing.
31. A network server, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
32. The network server according to claim 31, wherein the at least one transceiver is mounted to a circuit board within the housing.
33. A network storage device, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
34. The network storage device according to claim 33, wherein the at least one transceiver is mounted to a circuit board within the housing.
35. A network router, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
36. The network router according to claim 35, wherein the at least one transceiver is mounted to a circuit board within the housing.
37. A network NIC card, comprising: a housing having one or more connection panels; and a set of ports, wherein each port within the set of ports is configured to receive data streams from an external medium, and to transmit data streams to an external medium, wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector, and wherein the connector is mounted to the connection panel for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
38. The network NIC card according to claim 37, wherein the at least one transceiver is mounted to a circuit board within the housing.

ABSTRACT
A data center network device provides configurations where the port density can be increased by incorporating multiport transceivers within the device and the use of high density fiber connections on exterior panels of the device. The device also permits dynamically reassigning fiber connections to convert from single fiber connection paths to higher rate bonded fiber paths while at the same time making more efficient use of the fiber interconnections.

Claims

What is claimed is:
1. An endpoint network device, comprising: a central processing unit; and a network interface in communication with the central processing unit and having; at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams.
2. The endpoint network device according to claim 1, wherein the at least one port comprises a set of ports, and wherein each port in the set of ports includes a connector and at least one transceiver optically coupled to the connector.
3. The endpoint network device according to claim 2, further comprising an enclosure housing the central processing unit and the network interface, and wherein the connector is mounted to a panel of the enclosure for connecting to external media, and the at least one transceiver is mounted within the housing such that the at least one transceiver is separated from the connector.
4. The endpoint network device according to claim 2, wherein the connector is optically coupled to the at least one transceiver using at least one fiber cable.
5. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver.
6. The endpoint network device according to claim 2, wherein the connector comprises a simplex, duplex, or high density fiber connector.
7. The endpoint network device according to claim 6, wherein the high density fiber connector comprises MPO connectors.
8. The endpoint network device according to claim 6, wherein the high density fiber connector comprises MXC connectors.
9. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the at least one multiport transceiver ports can be connected to individual fiber connections on the connector.
10. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver and the connector comprises high density fiber connector, and wherein each of the multiport transceiver ports can be connected to the high density fiber connector with individual fiber connections.
11. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the multiport transceiver ports can be configured as a redundant path connection.
12. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the multiport transceiver ports can be configured as an alternate path connection permitting data streams to be automatically switched, under central processing unit control, to different endpoints.
13. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the multiport transceiver ports can be configured to switch an input data stream to a different port on an outgoing transceiver port without terminating the data stream on the endpoint network device.
14. The endpoint network device according to claim 2, wherein the at least one transceiver comprises at least one multiport transceiver, and wherein each of the multiport transceiver ports can be configured to switch an input data stream to multiple different ports on outgoing transceiver ports for multicast or broadcast without terminating the data stream on the endpoint network device.
15. The endpoint network device according to claim 2, wherein one or more of the ports in the set of ports comprise managed connectivity ports capable of reading a physical location identification from a managed connectivity port from an external medium connected to the one or more ports in the set of ports.
16. The endpoint network device according to claim 1, wherein the at least one port comprises a set of ports, and wherein each port in the set of ports includes a connector and a transceiver optically coupled to the connector.
17. The endpoint network device according to claim 16, further comprising an enclosure housing the central processing unit and the network interface, and wherein the connector and transceiver are mounted to a panel of the enclosure for connecting to external media.
18. The endpoint network device according to claim 16, wherein the transceiver comprises a multiport transceiver.
19. The endpoint network device according to claim 16, wherein the connector comprises a simplex, duplex, or high density fiber connector.
20. The endpoint network device according to claim 19, wherein the high density fiber connector comprises MPO connectors.
21. The endpoint network device according to claim 19, wherein the high density fiber connector comprises MXC connectors.
22. The endpoint network device according to claim 19, wherein the network interface is on a NIC.
23. An endpoint network device, comprising: a central processing unit; and a network interface having; at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector in response to instructions from the central processing unit.
24. A data center network architecture, comprising: at least one cluster of endpoint network devices, wherein each endpoint network device includes; a central processing unit, and a network interface in communication with the central processing unit and having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams; a distribution layer of network switches; and a core switching layer.
25. A data center network architecture, comprising: at least one cluster of endpoint network devices, wherein each endpoint network device includes; a central processing unit; and a network interface having; at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector in response to instructions from the central processing unit; a distribution layer of high density path switches; and a core switching layer.
26. A data center network architecture, comprising: at least one cluster of endpoint network devices, wherein each endpoint network device includes; a central processing unit, and a network interface in communication with the central processing unit and having at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, and a communication module that is capable of converting received data streams from a network protocol to a data format capable of being processed by the central processing unit, and capable of converting data in the data format processed by the central processing unit to a network protocol such that the network interface can transmit data streams; and a distribution layer of high density path switches.
27. A data center network architecture, comprising: at least one cluster of endpoint network devices, wherein each endpoint network device includes; a central processing unit; and a network interface having; at least one port configured to receive data streams from an external medium and to transmit data streams to an external medium, wherein the at least one port includes a multi-fiber connector, a multiport transceiver and a multi-fiber interconnect cable connecting the multi-fiber connector to the multiport transceiver, and a communication module in communication with the at least one port and with the central processing unit, and capable of switching data streams received on one fiber of the multi-fiber connector that pass through one fiber of the multi-fiber interconnect cable to the multiport transceiver, from the multiport transceiver through a second fiber of the multi-fiber interconnect cable to a second fiber of the multi-fiber connector in response to instructions from the central processing unit; and
a distribution layer of high density path switches.
PCT/US2016/026714 2015-04-09 2016-04-08 Data center endpoint network device with built in switch WO2016164769A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562145352P 2015-04-09 2015-04-09
US62/145,352 2015-04-09

Publications (1)

Publication Number Publication Date
WO2016164769A1 true WO2016164769A1 (en) 2016-10-13

Family

ID=57072241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/026714 WO2016164769A1 (en) 2015-04-09 2016-04-08 Data center endpoint network device with built in switch

Country Status (1)

Country Link
WO (1) WO2016164769A1 (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060251419A1 (en) * 1999-01-15 2006-11-09 Cisco Technology, Inc. Method of allocating bandwidth in an optical network
US20040029417A1 (en) * 2002-08-07 2004-02-12 Andy Engel Pluggable electrical transceiver module with high density form factor
US20060018329A1 (en) * 2004-07-26 2006-01-26 Enigma Semiconductor Network interconnect crosspoint switching architecture and method
US20100142544A1 (en) * 2007-03-14 2010-06-10 Zonit Structured Solutions, Llc Data center network distribution system
US20090226181A1 (en) * 2008-02-13 2009-09-10 Fiber Connections Inc. Digital signal media conversion panel
US20100215049A1 (en) * 2009-02-13 2010-08-26 Adc Telecommunications, Inc. Inter-networking devices for use with physical layer information
US20130194005A1 (en) * 2010-02-02 2013-08-01 Nokia Corporation Generation of differential signals
US20120008945A1 (en) * 2010-07-08 2012-01-12 Nec Laboratories America, Inc. Optical switching network
US20130179622A1 (en) * 2012-01-06 2013-07-11 Gary L. Pratt System and method for transmitting and receiving data using an industrial expansion bus
US20140036920A1 (en) * 2012-08-02 2014-02-06 International Business Machines Corporation Identifying a port associated with a network node to which a selected network link is connected
US20140317249A1 (en) * 2013-04-23 2014-10-23 Cisco Technology, Inc. Accelerating Network Convergence for Layer 3 Roams in a Next Generation Network Closet Campus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3461064A1 (en) * 2017-09-26 2019-03-27 Quanta Computer Inc. Method and system for automatically configuring fanout mode of a network switch port in an interconnected network
CN109560957A (en) * 2017-09-26 2019-04-02 广达电脑股份有限公司 Method for determining operation speed of network interface card and port fan-out configuration system
US10523500B2 (en) 2017-09-26 2019-12-31 Quanta Computer Inc. Method and system for automatically configuring fanout mode of a network switch port in an interconnected network
CN109560957B (en) * 2017-09-26 2021-10-08 广达电脑股份有限公司 Method for determining the operating speed of a network interface card and a port fan-out configuration system
CN114759982A (en) * 2018-02-05 2022-07-15 黄贻强 Fan-out optical fiber cable transfer rack and backbone plane

Similar Documents

Publication Publication Date Title
US11166089B2 (en) System for increasing fiber port density in data center applications
AU2015283976B2 (en) Data center path switch with improved path interconnection architecture
US11671330B2 (en) Network interconnect as a switch
US9989724B2 (en) Data center network
US8842988B2 (en) Optical junction nodes for use in data center networks
US20200192035A1 (en) High-density fabric systems interconnected with multi-port aggregated cables
EP2345181B1 (en) Methods and systems for providing full avionics data services over a single fiber
US9954608B2 (en) Method and apparatus for performing path protection for rate-adaptive optics
JP6605747B2 (en) Line card chassis, multi-chassis cluster router and packet processing
JP2018535613A (en) Line card chassis, multi-chassis cluster router and routing and packet processing
US20050213989A1 (en) Reconfigurable data communications system with a removable optical backplane connector
WO2016164769A1 (en) Data center endpoint network device with built in switch
US10623101B1 (en) Hyperscale photonics connectivity solution
US11340410B2 (en) Dimensionally all-to-all connected network system using photonic crossbars and quad-node-loop routing
US10116558B2 (en) Packet switch using physical layer fiber pathways
NZ722392B2 (en) Packet switch using physical layer fiber pathways

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16777387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16777387

Country of ref document: EP

Kind code of ref document: A1