I just posted the following question to ServerFault… and then realised there might be people out there in magical internetland who know the answer but never visit any of the SO sites, so I've posted it here too. Feel free to respond here or on ServerFault.
In a recent upgrade (from Openstack Diablo on Ubuntu Lucid to Openstack Essex on Ubuntu Precise), we found that DNS packets were frequently (almost always) dropped on the bridge interface (br100). For our compute-node hosts, the NIC under that bridge is a Mellanox MT26428 using the mlx4_en driver module.
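If you want to check what your own hardware is doing, something like this will confirm which driver binds the card (eth0 is just an assumed name here; substitute whatever interface sits under your bridge):

    # identify the Mellanox card on the PCI bus
    lspci | grep -i mellanox
    # show driver name, version and firmware for the interface
    ethtool -i eth0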
So far, we've found two workarounds:

1. Use an old Lucid kernel (e.g. 2.6.32-41-generic). This causes other problems, in particular the lack of cgroups and the old versions of the kvm and kvm_amd modules (we suspect the kvm module version is the source of a bug we're seeing where occasionally a VM will use 100% CPU). We've been running with this for the last few months, but can't stay here forever.
2. Disable netfilter processing on bridges with these sysctl settings:

    net.bridge.bridge-nf-call-arptables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-ip6tables = 0
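For anyone wanting to try this, the settings can be applied at runtime and then persisted (a quick sketch; the paths are the stock Ubuntu ones):

    # apply immediately, no reboot needed
    sysctl -w net.bridge.bridge-nf-call-arptables=0
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0
    # persist across reboots: add the same three lines to /etc/sysctl.conf
    # (or a file under /etc/sysctl.d/), then reload
    sysctl -p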
Something I should have mentioned earlier – this happens even on machines that don't have any Openstack or even libvirt packages installed. Same hardware, same everything, but with not much more than the Ubuntu 12.04 base system installed.
On kernel 2.6.32-41-generic, the bridge works as expected.
On kernel 3.2.0-29-generic, using the ethernet interface, it works perfectly.
Using a bridge on that same NIC, it fails unless net.bridge.bridge-nf-call-iptables = 0.
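A quick way to reproduce the symptom is to toggle that sysctl while firing DNS queries through the bridge (a sketch; 192.168.1.1 stands in for whatever resolver your hosts actually use):

    # with bridge netfilter enabled, queries through br100 time out
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    dig @192.168.1.1 example.com +time=2 +tries=1
    # with it disabled, the same query succeeds
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    dig @192.168.1.1 example.com +time=2 +tries=1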
So, it seems pretty clear that the problem is either in the Mellanox driver, the updated kernel's bridging code, the netfilter code, or some interaction between them.
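One way to narrow down where the packets disappear is to capture on the physical interface and on the bridge at the same time (again a sketch; eth0 is an assumed name for the Mellanox port enslaved to br100):

    # in one terminal: watch DNS traffic arriving on the physical NIC
    tcpdump -ni eth0 port 53
    # in another: see whether the same packets make it through the bridge
    tcpdump -ni br100 port 53

If the queries show up on eth0 but never on br100, the drop is happening somewhere in the bridge/netfilter path rather than in the driver's receive path.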
Interestingly, I have other machines (without a Mellanox card) with a bridge interface that don't exhibit this problem, with NICs ranging from cheap r8169 cards to better-quality Broadcom tg3 Gbit cards in some Sun Fire X2200 M2 servers, and Intel Gbit cards in Supermicro motherboards. Like our Openstack compute nodes, they all use the bridge interface as their primary (or sometimes only) interface with an IP address – they're configured that way so we can run VMs using libvirt & kvm with real IP addresses rather than NAT.
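For reference, that setup looks roughly like this on Ubuntu with bridge-utils (a sketch with made-up addresses; eth0 is the assumed physical port):

    # /etc/network/interfaces - bridge as the host's primary interface
    auto br100
    iface br100 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0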
So, that indicates that the problem is specific to the Mellanox driver, although the blog post I mentioned above had a similar problem with some Broadcom NICs that used the bnx2 driver.