
Improve the performance of your network with EtherChannel


We will see here how to improve the performance of your network with the EtherChannel technology, coupled with the Link Aggregation Control Protocol (LACP) negotiation protocol.

For reference, according to Wikipedia: EtherChannel is a port link aggregation technology. Up to 8 active ports can be bundled, so a total bandwidth of 800 Mbit/s, 8 Gbit/s or 80 Gbit/s is possible depending on the port speed (8 × 100 Mbit/s, 8 × 1 Gbit/s or 8 × 10 Gbit/s links).

The main features of LACP:

  • Standardized as IEEE 802.3ad (now IEEE 802.1AX)
  • Dynamically negotiates and maintains the bundle with the peer device
  • Detects member-link failures and removes failed links from the bundle

Configuration

  • Switch model: Cisco Catalyst 1000 Series Switches

EtherChannel on two Switches

Configure

Let's start with a simple configuration where we have two VLANs (1 and 2, to keep it simple) and a trunk interface between two switches. In order to improve the bandwidth between them, we will create an EtherChannel on the trunk interface, composed of two Gigabit links.

Cisco Etherchannel architecture with two switches

⚠️ The interfaces GigabitEthernet1/0/25 and GigabitEthernet1/0/26 of both switches must have exactly the same configuration for this to work (trunk mode, allowed VLANs, etc.). ⚠️

  • Configure switch01:
switch01(config)# interface range GigabitEthernet1/0/25-26
switch01(config-if-range)# switchport mode trunk
switch01(config-if-range)# switchport trunk allowed vlan none
switch01(config-if-range)# switchport trunk allowed vlan 1,2
switch01(config-if-range)# channel-protocol lacp
switch01(config-if-range)# channel-group 1 mode passive
  • Configure switch02:
switch02(config)# interface range GigabitEthernet1/0/25-26
switch02(config-if-range)# switchport mode trunk
switch02(config-if-range)# switchport trunk allowed vlan none
switch02(config-if-range)# switchport trunk allowed vlan 1,2
switch02(config-if-range)# channel-protocol lacp
switch02(config-if-range)# channel-group 1 mode active

Note: at least one side of the bundle must run LACP in active mode; a passive/passive pairing will never form the channel.
  • We can now configure the EtherChannel interface like any other interface, using the name Po1, as sketched below:
switch02(config)# interface port-channel 1
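For example, a minimal sketch of settings that could be applied on the bundle interface itself; the description text is purely illustrative:

switch02(config)# interface port-channel 1
switch02(config-if)# description LACP bundle to switch01
switch02(config-if)# switchport mode trunk
switch02(config-if)# switchport trunk allowed vlan 1,2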

Check

  • Verify the EtherChannel state:
switch01# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      N - not in use, no aggregation
        f - failed to allocate aggregator

        M - not in use, minimum links not met
        m - not in use, port not aggregated due to minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

        A - formed by Auto LAG

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Gi1/0/25(P) Gi1/0/26(P)
  • Verify the load-balancing method currently in use:
switch01# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address
  • Verify how effectively a configured load-balancing method is performing:
switch01# show etherchannel port-channel

Misc

  • Configure the load-balancing method:
switch01(config)# port-channel load-balance src-dst-port
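After changing the method, you can confirm it is in effect with the same command used earlier:

switch01# show etherchannel load-balance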

EtherChannel on a Windows Server


EtherChannel is also possible with Windows Server. We will see an example with a server that has two network interfaces.

Cisco Etherchannel architecture with a Windows Server

Cisco Switch

  • Configure switch01:
switch01(config)# interface range GigabitEthernet1/0/25-26
switch01(config-if-range)# switchport mode access
switch01(config-if-range)# switchport access vlan 2
switch01(config-if-range)# channel-protocol lacp
switch01(config-if-range)# channel-group 2 mode passive
  • We can now configure the EtherChannel interface like any other interface, using the name Po2:
switch01(config)# interface port-channel 2
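As before, a minimal sketch of the bundle interface configuration, this time mirroring the access-mode member ports:

switch01(config)# interface port-channel 2
switch01(config-if)# switchport mode access
switch01(config-if)# switchport access vlan 2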

Windows Server

⚠️ Be aware that creating the NIC Teaming interface will remove the configuration of the existing network cards. Don't perform this operation over a remote connection, otherwise you will lose access to the server. ⚠️

GUI

  • From Server Manager, click on the NIC Teaming link:
Windows Server Manager console, NIC Teaming link
  • From the NIC Teaming window, create a new team:
Windows Server Manager, Create a New NIC Team
  • Give it a name, set LACP as the Teaming mode and Address Hash as the Load balancing mode:
Windows Server NIC Teaming window
  • Check that everything is fine:
Windows Server NIC Teaming console
  • From Network Connections, configure your team interface like any other network interface:
Windows Network Connections with a NIC Team network adapter

PowerShell

  • You can do the same with a single PowerShell command line:
PS C:\> New-NetLbfoTeam -LoadBalancingAlgorithm IPAddresses -TeamingMode Lacp -Name NewTeam -TeamMembers Ethernet0,Ethernet1 -Confirm:$false
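To check the result, the same NetLbfo module provides query cmdlets (assuming the team name used above):

PS C:\> Get-NetLbfoTeam -Name NewTeam
PS C:\> Get-NetLbfoTeamMember -Team NewTeam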

EtherChannel on a GNU/Linux Server

Note: I haven't tested this in the real world yet.

GNU/Linux Bonding architecture

Cisco Switch

  • Configure switch01:
switch01(config)# interface range GigabitEthernet1/0/25-26
switch01(config-if-range)# switchport mode access
switch01(config-if-range)# switchport access vlan 2
switch01(config-if-range)# channel-protocol lacp
switch01(config-if-range)# channel-group 2 mode active
  • We can now configure the EtherChannel interface like any other interface, using the name Po2:
switch01(config)# interface port-channel 2

Debian Server

  • Install the ifenslave package:
root@host:~# apt-get install ifenslave
  • Edit the /etc/network/interfaces file and create the bond0 interface, here with the physical interfaces eth0 and eth1:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet static
    address 192.168.1.200
    netmask 255.255.255.0
    network 192.168.1.0
    gateway 192.168.1.254
    bond-slaves eth0 eth1
    # 4 for LACP/802.3ad
    bond-mode 4
    # frequency of link status check in milliseconds
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate fast
    # mac and ip
    bond-xmit-hash-policy layer2+3
  • Reboot:
root@host:~# reboot
  • Check that the bond0 interface is up:
root@host:~# ip address show dev bond0
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 72:17:b9:8d:1e:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.200/24 brd 192.168.1.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::7017:b9ff:fe8d:1ead/64 scope link
       valid_lft forever preferred_lft forever
  • Get bond0 interface information:
root@host:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.0-9-amd64

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 72:17:b9:8d:1e:ad
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 15
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

root@host:~# cat /sys/class/net/bond0/bonding/mode
802.3ad 4
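If the bundle doesn't form, a first sanity check is that the kernel bonding module is loaded (the ifenslave package normally takes care of this):

root@host:~# lsmod | grep bonding

On the switch side, you can also check that the server is seen as an LACP partner:

switch01# show lacp neighbor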


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
