This one bugged me for a while, because it was so misleading to debug. When faced with connection loss on larger packets, most people (me included) immediately think: damn it, bitten again by PMTU, now I have to fix the MTU all the way in and out. Recently I got exactly this behaviour on an OpenStack cluster with VLAN provider networking on 10GbE bonds (port channels/LACP/bonding mode 4).
On a node accessible from the target, create two files:

```shell
head -c 1600 /dev/urandom > 1600.txt
head -c 500 /dev/urandom > 500.txt
```
1600 bytes is big enough to run into fragmentation problems (the default MTU is 1500 almost everywhere); increase the file size if you have jumbo frames enabled on your path.
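Before reaching for file transfers, you can also probe the path MTU directly. A 1500-byte MTU leaves room for a 1472-byte ICMP payload (1500 minus 20 bytes IPv4 header minus 8 bytes ICMP header). A small sketch, assuming a Linux host with iputils ping; 10.10.91.1 is the target from the scp example below:

```shell
# Largest ICMP payload that fits an MTU without fragmentation:
# MTU minus 20-byte IPv4 header minus 8-byte ICMP header.
MTU=1500
PAYLOAD=$((MTU - 20 - 8))
echo "max unfragmented ICMP payload: $PAYLOAD"   # 1472 for MTU 1500

# Probe with the don't-fragment bit set (iputils ping). A payload one
# byte larger should fail with "Frag needed" if the path MTU is 1500:
# ping -c 3 -M do -s $PAYLOAD 10.10.91.1
# ping -c 3 -M do -s $((PAYLOAD + 1)) 10.10.91.1
```

If the oversized probe fails cleanly with a fragmentation error, it really is a PMTU problem; if large packets silently stall instead, something else (like the offload issue below) is eating them.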
Then copy the files, e.g. with scp. You will see that copying the small file succeeds:
```shell
scp admin@10.10.91.1:500.txt .
admin@10.10.91.1's password:
500.txt                100%  500     0.5KB/s   00:00
```
but copying the large file stalls:
```shell
scp admin@10.10.91.1:1600.txt .
admin@10.10.91.1's password:
1600.txt                 0%    0     0.0KB/s - stalled -^C
```
This is exactly how an MTU problem reveals itself.
I checked everything and played with various fixes that had helped in the past, things like:
```shell
iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```
This did not help.
Then I got a hint from a friend (Thore Bahr) to have a look at rx-vlan-offload. I did, but that did not help either.
However, this triggered further investigation and deeper digging into the root cause. Finally, combined with some other observations, it revealed itself:
=> There is an issue with the LRO setting when bonding Intel ixgbe adapters. We have to turn off LRO.
From the [Base Driver for the Intel® Ethernet 10 Gigabit PCI Express Family of Adapters README](http://downloadmirror.intel.com/22919/eng/README.txt):
> IMPORTANT NOTE
>
> WARNING: The ixgbe driver compiles by default with the LRO (Large Receive Offload) feature enabled. This option offers the lowest CPU utilization for receives, but is completely incompatible with routing/ip forwarding and bridging. If enabling ip forwarding or bridging is a requirement, it is necessary to disable LRO using compile time options as noted in the LRO section later in this document. The result of not disabling LRO when combined with ip forwarding or bridging can be low throughput or even a kernel panic.
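`ethtool -k <adapter>` lists the current offload flags, so you can check whether LRO is active before changing anything. A small helper sketch: it reads the output from stdin so it can also be fed canned text for testing; `p3p1` is one of the bond slaves in this setup:

```shell
# lro_enabled: exit 0 if `ethtool -k` output on stdin reports
# large-receive-offload as "on", non-zero otherwise.
lro_enabled() {
    grep -q '^large-receive-offload: on'
}

# On the real host:
#   ethtool -k p3p1 | lro_enabled && echo "LRO is on, disable it"
```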
Now to ethtool. First, try to open a connection to prove that it is not working:
```shell
$ ip netns exec qdhcp-9d444bee-0395-47d9-ae7e-ae315c25e088 ssh 50.0.0.9
```
Then change the settings with `ethtool -K <adapter> lro off`:
```shell
$ ethtool -K p3p1 lro off
$ ethtool -K p3p2 lro off
$ ethtool -k p3p1
Offload parameters for eth6:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
```
This was not working before; now it works:
```shell
$ ip netns exec qdhcp-9d444bee-0395-47d9-ae7e-ae315c25e088 ssh 50.0.0.9 'uptime'
Warning: Permanently added '50.0.0.9' (ECDSA) to the list of known hosts.
 17:34pm  up 10 days  3:20,  0 users,  load average: 0.00, 0.01, 0.05
```
To make the change persistent, http://www.novell.com/support/kb/doc.php?id=7013089 suggests adding the following option to the network config:

```shell
ETHTOOL_OPTIONS='-K iface lro off'
```
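`ETHTOOL_OPTIONS` is SUSE-specific. On distributions without it, a one-shot systemd unit is one way to persist the setting across reboots. A sketch only; the unit name, ethtool path, and interface names are assumptions for this setup:

```ini
# /etc/systemd/system/disable-lro.service (hypothetical unit name)
[Unit]
Description=Disable LRO on 10GbE bond slaves
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K p3p1 lro off
ExecStart=/usr/sbin/ethtool -K p3p2 lro off

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable disable-lro.service`.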
We have to add this to the automation for all interfaces in the bond.
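The slaves of a bond can be read from sysfs, so the per-interface commands can be scripted instead of listed by hand. A sketch, assuming Linux bonding; the `SLAVES_FILE` override and `DRY_RUN` knob are conveniences added here for testing, and `bond0` is an assumed bond name:

```shell
# Disable LRO on every slave of a bond. SLAVES_FILE may be overridden
# for testing; set DRY_RUN=echo to print commands instead of running them.
disable_lro_on_slaves() {
    bond=$1
    slaves_file=${SLAVES_FILE:-/sys/class/net/$bond/bonding/slaves}
    [ -r "$slaves_file" ] || { echo "cannot read $slaves_file" >&2; return 1; }
    for iface in $(cat "$slaves_file"); do
        $DRY_RUN ethtool -K "$iface" lro off
    done
}

# usage on a real host: disable_lro_on_slaves bond0
```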
Finally, since we use Chef, we can push this setting down to all compute nodes:
```shell
knife ssh roles:*comp* -- ethtool -K p3p1 lro off
knife ssh roles:*comp* -- ethtool -k p3p1 | grep large
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
[ .. snipped .. ]
df0-xx-xx-xx-aa-aa.test-openstack.org large-receive-offload: off
```