Network Teaming / Network Bonding

Network teaming, sometimes called network bonding in Linux, enables multiple network adapters on a computer to be placed into a team for the purposes of increased bandwidth and failover: if one adapter in the team fails, network traffic continues over the remaining adapters.

Network Teaming / Network Bonding of NIC Ports

Network teaming and network bonding are also known as Ethernet bonding, port trunking, channel teaming, NIC teaming, NIC bonding, link aggregation, and so on. Linux also has a capability called teaming that is an evolution of its network bonding.

Network teaming / network bonding applies to both dedicated and shared (logical) ports. It applies to NIC ports; it does not apply to HBA ports or InfiniBand HCA ports. (For more information on network teaming / network bonding of InfiniBand HCA ports, see the later subsection.)

A ClearPath Forward fabric supports network teaming / network bonding of NIC ports as follows: There are no limitations on teaming/bonding from the perspective of the ClearPath Forward platform firmware or configuration. That is, there are no limitations on the number of ports that can be teamed/bonded, and there are no restrictions on the pairing based on the port or slot combination or adapter type.

Microsoft Windows Network Teaming of NIC Ports

Windows operating system functionality provides the means to implement and manage network teaming. The administrator uses the Windows NIC Teaming Management UI tool or NIC Teaming Windows PowerShell commands. For information on how to implement and manage NIC teaming, refer to Windows documentation.
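As a minimal sketch of the PowerShell approach, the following commands create and inspect an LBFO team. The team name "Team1" and the member names "NIC 1" and "NIC 2" are placeholders; substitute the adapter names reported by Get-NetAdapter on your system.

```powershell
# Create a switch-independent LBFO team from two physical adapters.
# "Team1", "NIC 1", and "NIC 2" are placeholder names.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC 1","NIC 2" `
    -TeamingMode SwitchIndependent

# Verify the team and the state of its member adapters.
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```

These cmdlets require administrative privileges; refer to the Windows documentation for the full set of teaming modes and load-balancing algorithms.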

Microsoft Windows Server 2012 and 2012 R2 do not support Load Balancing/Failover (LBFO) teaming for shared ports. Support for ClearPath Forward Windows network teaming is as follows:

Teaming Support for Dedicated Ports     Teaming Support for Shared Ports

Load Balancing/Failover (LBFO)          Network Load Balancing (NLB)

Network Load Balancing (NLB)

Notes:

  • Dedicated ports are ports that are not shared.

  • Shared ports are also known as logical ports.

Linux Network Bonding and Teaming of NIC Ports

Linux operating system functionality provides the means to implement and manage both network bonding and teaming. For information on how to implement and manage network bonding and teaming, refer to Linux documentation.
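As a minimal sketch using NetworkManager's nmcli tool, the following commands create an active-backup bond and, alternatively, a team device. The interface names eno1 and eno2 and the connection names are examples; substitute the names shown by "ip link" on your system.

```shell
# Create an active-backup bond (failover) with link monitoring every 100 ms;
# eno1/eno2 and the connection names are placeholder examples.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 \
    master bond0 slave-type bond
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 \
    master bond0 slave-type bond
nmcli connection up bond0

# The newer teaming capability uses the team device type instead,
# with a teamd runner selecting the policy:
nmcli connection add type team con-name team0 ifname team0 \
    team.runner activebackup
```

These commands require root privileges; see the Linux distribution's documentation for other bonding modes (for example, balance-rr or 802.3ad) and teamd runner options.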

Network Teaming / Network Bonding of InfiniBand HCA Ports

In general, Windows network teaming for Network Load Balancing (NLB) or Load Balancing/Failover (LBFO) is not supported for shared ports (Single Root I/O Virtualization, SR-IOV). However, with a Mellanox HCA driver upgrade in release 4.0, ClearPath Forward platforms have the capability of a vendor-specific type of network teaming on Windows that works for SR-IOV shared ports on InfiniBand HCA ports. This flavor of teaming, supplied by Mellanox, provides only failover capability; it does not provide load balancing.

Note: Do not confuse the Mellanox-supplied variant of network teaming with the Microsoft Windows-supplied network teaming for NLB or LBFO.