UbuntuHelp:UbuntuLTSP/Trunking

This wiki page is specific to Ubuntu Version(s): 8.10


Introduction

If your server has multiple NICs, you can speed up your network by using bonding. A switch can transfer data on several ports in parallel, so bonding the 4 NICs of a server connected to a 100 Mbps switch raises the theoretical aggregate bandwidth to 400 Mbps.
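Before starting, it helps to confirm how many interfaces the server actually has. A quick check (not part of the original guide; interface names may differ on your system) is:

ifconfig -a

Every ethX interface listed there is a candidate slave for the bond.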

Configuring your switch

If you have a managed switch, you may need to configure it in order for bonding to work.
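What exactly has to be configured depends on the switch model, so treat the following only as a rough illustration. On a Cisco Catalyst-style switch, a static link-aggregation group for the four server ports (assumed here to be FastEthernet0/1 through 0/4) might look roughly like this:

configure terminal
 interface range FastEthernet0/1 - 4
  channel-group 1 mode on
 end

channel-group ... mode on creates a static EtherChannel with no negotiation protocol, which matches the round-robin bond-mode 0 used below; for 802.3ad (mode 4) you would use an LACP mode such as channel-group 1 mode active instead. Consult your switch's own documentation before applying anything similar.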

Installation on Intrepid

1. Install ifenslave 2.6 and its ifupdown scripts:

sudo apt-get install ifenslave

2. Take down the ethernet interfaces you're going to bond:

sudo ifdown eth0
sudo ifdown eth1
...

3. Remove all the ethX lines from /etc/network/interfaces and insert something like the following:

auto bond0
iface bond0 inet static
	address 10.160.31.10
	netmask 255.255.255.0
	gateway 10.160.31.1
	slaves all
	bond-mode 0
	bond-miimon 100

4. Bring up the bonding device:

sudo ifup bond0
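If everything worked, the Linux bonding driver exposes the state of the bond under /proc. A quick sanity check (not part of the original steps, but harmless) is:

cat /proc/net/bonding/bond0

The output should show the bonding mode, the MII status, and one section per enslaved ethX interface.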

Benchmarks

A real-life example: in a lab with a 100 Mbps switch, 8 clients and a 4-NIC server, netperf 2.4.4-5ubuntu1 was installed in the chroot, and the following command was run from the thin client (TC) consoles:

netperf -c -C -H server -l 9999 -D 10,1
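For completeness, netperf also has to be present inside the client chroot. Assuming the default i386 chroot path of /opt/ltsp/i386 (adjust for your architecture), it could be installed with something like:

sudo chroot /opt/ltsp/i386 apt-get install netperf
# if the clients boot from an NBD image rather than NFS, rebuild it afterwards:
sudo ltsp-update-image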

Average results:

client01    42 Mbps
client02    47 Mbps
client03    47 Mbps
client04    46 Mbps
client05    43 Mbps
client06    55 Mbps
client07    54 Mbps
client08    56 Mbps

netperf was running simultaneously on all the LTSP clients, so this sums to about 390 Mbps of overall bandwidth, very close to a full 4x speedup for the 4 NICs. Of course, the actual data transferred in a normal usage scenario would be less than that, so video playback with totem was used as the next benchmark:

                191Mb           381Mb           572Mb           763Mb      954Mb
└───────────────┴───────────────┴───────────────┴───────────────┴───────────────
Unknown-00-03-b3-48-50-2c  => Unknown-00-c0-df-08-97-63  39.6Mb  39.2Mb  40.9Mb
                           <=                            1.89Mb  1.85Mb  1.96Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-03-47-ae-12-14  36.2Mb  35.0Mb  35.9Mb
                           <=                            1.72Mb  1.64Mb  1.70Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-fc-5d-61-7e  23.5Mb  34.9Mb  38.8Mb
                           <=                            1.10Mb  1.60Mb  1.78Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-bf-1a-0a-a8  25.1Mb  25.0Mb  25.4Mb
                           <=                            1.17Mb  1.18Mb  1.22Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-bf-2d-48-79  26.2Mb  24.9Mb  26.0Mb
                           <=                            1.24Mb  1.20Mb  1.25Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-bf-1a-16-7c  24.4Mb  24.7Mb  25.5Mb
                           <=                            1.17Mb  1.20Mb  1.22Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-bf-1a-16-85  24.3Mb  24.6Mb  25.7Mb
                           <=                            1.14Mb  1.17Mb  1.22Mb
Unknown-00-03-b3-48-50-2c  => Unknown-00-50-bf-d7-0f-86  26.3Mb  24.5Mb  24.8Mb
                           <=                            1.25Mb  1.18Mb  1.20Mb
────────────────────────────────────────────────────────────────────────────────
TX:             cumm:  60.8GB   peak:    262Mb  rates:    226Mb   233Mb   243Mb
RX:                    4.98GB           12.3Mb           10.7Mb  11.0Mb  11.6Mb
TOTAL:                 65.7GB            275Mb            236Mb   244Mb   255Mb

A peak of 275 Mbps is pretty good for a 100 Mbps switch. And the 300 MHz Celeron CPUs of the clients were constantly at 100% usage, even with LDM_DIRECTX=True, so it is possible that throughput would go higher with faster client CPUs.
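The per-host breakdown above looks like the output of iftop. Assuming iftop is what produced it (and that it is installed on the server), a similar live view of traffic on the bonded interface can be brought up with:

sudo iftop -i bond0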

Bonding modes

In step 3 above, instead of bond-mode 0, you may insert any of the following modes. The text below is from the ifenslave documentation:

balance-rr or 0
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

active-backup or 1
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will issue one or more gratuitous ARPs on the newly active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, provided that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN id. This mode provides fault tolerance. The primary option (see the full bonding documentation) affects the behavior of this mode.

balance-xor or 2
XOR policy: Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count]. Alternate transmit policies may be selected via the xmit_hash_policy option. This mode provides load balancing and fault tolerance.

broadcast or 3
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

802.3ad or 4
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy via the xmit_hash_policy option. Note that not all transmit policies may be 802.3ad compliant, particularly in regards to the packet mis-ordering requirements of section 43.2.4 of the 802.3ad standard. Differing peer implementations will have varying tolerances for noncompliance.
Prerequisites:
1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

balance-tlb or 5
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisite:
Ethtool support in the base drivers for retrieving the speed of each slave.

balance-alb or 6
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server. Receive traffic from connections created by the server is also balanced. When the local system sends an ARP Request the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP Reply arrives from the peer, its hardware address is retrieved and the bonding driver initiates an ARP reply to this peer assigning it to one of the slaves in the bond.
A problematic outcome of using ARP negotiation for balancing is that each time that an ARP request is broadcast it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP Replies) to all the peers with their individually assigned hardware address such that the traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond and when an inactive slave is re-activated. The receive load is distributed sequentially (round robin) among the group of highest speed slaves in the bond. When a link is reconnected or a new slave joins the bond, the receive traffic is redistributed among all active slaves in the bond by initiating ARP Replies with the selected MAC address to each of the clients. The updelay parameter (detailed in the bonding documentation) must be set to a value equal or greater than the switch's forwarding delay so that the ARP Replies sent to the peers will not be blocked by the switch.
Prerequisites:
1. Ethtool support in the base drivers for retrieving the speed of each slave.
2. Base driver support for setting the hardware address of a device while it is open. This is required so that there will always be one slave in the team using the bond hardware address (the curr_active_slave) while having a unique hardware address for each slave in the bond. If the curr_active_slave fails, its hardware address is swapped with the new curr_active_slave that was chosen.
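For example, to try IEEE 802.3ad dynamic link aggregation instead of round-robin, only the bond-mode line of the stanza from step 3 changes. This is a sketch assuming your switch supports 802.3ad and its ports are configured for LACP; the addresses are the same placeholders used above:

auto bond0
iface bond0 inet static
	address 10.160.31.10
	netmask 255.255.255.0
	gateway 10.160.31.1
	slaves all
	bond-mode 4
	bond-miimon 100

After bringing the bond back up with ifup bond0, the active mode can be checked with:

cat /sys/class/net/bond0/bonding/mode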

See Also

  • UbuntuLTSP - Community Ubuntu LTSP Documentation.
