Intel i350-T4 network performance with jumbo frames and iSCSI
The issue is that iSCSI performance is poor. iPerf tests confirm that with frame sizes above ~3k, throughput drops rapidly from about 95% of line speed towards 25%. RX drops can be observed with ifconfig.
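A minimal sketch of the kind of iPerf run behind those numbers, assuming iperf3 and a receiver at 192.168.1.10 (both the tool version and the address are illustrative, not from the original test):

# iperf3 -s                                      (on the receiving side)
# iperf3 -c 192.168.1.10 -M 2960                 (TCP with the MSS clamped below the ~3k knee)
# iperf3 -c 192.168.1.10 -u -b 950M -l 8000      (UDP with ~8k datagrams to force jumbo-sized frames)

Comparing the two client runs should reproduce the drop from near line speed to roughly a quarter of it.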
The configuration is:
- an i350-T4 on Linux (Xen dom0)
- a Dell 5324 switch
- an i350-T4 on FreeBSD/FreeNAS
- 9k jumbo frames support (an end-to-end MTU check is sketched after this list)
The NICs have a VLAN and no link aggregation.
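Before digging into NIC tuning it is worth confirming the 9k MTU actually holds end to end across the VLAN and the Dell 5324. A do-not-fragment ping with an 8972-byte payload (8972 + 8 ICMP + 20 IP = 9000) should go through; the interface name is the one from the appendices and the target address is illustrative:

# ip link show enp7s0f0 | grep mtu
# ping -M do -s 8972 -c 3 192.168.1.10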
Issues considered
- irqbalance
- Dom0 being restricted to limited memory and CPUs
- flow control on the NICs and switch
- RX/TX ring parameters (ethtool -g)
- rx-flow-hash (ethtool -n)
- PCI transfer limits (Maximum Memory Read Byte Count, MMRBC); example inspection commands for several of these items follow this list
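Rough sketch of the commands used to inspect and adjust several of these on the dom0 side. The interface name comes from the appendices; the ring size of 4096 (the igb maximum) is an assumption about what was tried, not a recorded setting:

# ethtool -g enp7s0f0                        (show RX/TX ring sizes)
# ethtool -G enp7s0f0 rx 4096 tx 4096        (grow the rings)
# ethtool -a enp7s0f0                        (show pause / flow-control state)
# ethtool -A enp7s0f0 rx on tx on            (enable flow control; the switch port must match)
# ethtool -n enp7s0f0 rx-flow-hash tcp4      (show the RX flow hash fields for TCP/IPv4)
# grep enp7s0f0 /proc/interrupts             (check how the queue IRQs are spread, relevant to irqbalance)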
Links
- https://www.kernel.org/doc/Documentation/networking/igb.txt
- https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
- http://wiki.xen.org/wiki/Network_Throughput_and_Performance_Guide
- http://dak1n1.com/blog/7-performance-tuning-intel-10gbe/
- http://xenserver.org/blog/entry/iscsi-and-jumbo-frames.html
- https://sourceforge.net/p/e1000/mailman/message/31572592/
- http://www.hep.man.ac.uk/u/rich/net/NIC_tests_10GE_www/mmrbc.html
- http://staff.psc.edu/benninge/networking/10GbE/test_results.html
- https://communities.intel.com/message/221674#221674
- http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
Appendices
RX drops
# ifconfig enp7s0f0
enp7s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet6 2001:4428:225:1:2e53:4aff:fe00:ff6  prefixlen 64  scopeid 0x0<global>
        inet6 fd0c:898b:471c:1:2e53:4aff:fe00:ff6  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::2e53:4aff:fe00:ff6  prefixlen 64  scopeid 0x20<link>
        ether 2c:53:4a:00:0f:f6  txqueuelen 1000  (Ethernet)
        RX packets 1880091  bytes 2897552632 (2.6 GiB)
        RX errors 0  dropped 469012  overruns 0  frame 0
        TX packets 973354  bytes 2335977388 (2.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xfeb00000-febfffff
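The dropped counter reported by ifconfig is an aggregate. Something like the following narrows down which counter or queue is actually incrementing (the exact igb statistics names vary by driver version, so the grep pattern is a guess):

# ethtool -S enp7s0f0 | grep -iE 'drop|miss|fifo'
# ip -s -s link show enp7s0f0
# cat /proc/net/softnet_stat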
PCI Info
# lspci -nv
07:00.3 0200: 8086:1521 (rev 01)
        Subsystem: 8086:0001
        Flags: bus master, fast devsel, latency 0, IRQ 19
        Memory at fdf00000 (32-bit, non-prefetchable) [size=1M]
        Memory at feaf0000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-a1-25-1c
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: igb
        Kernel modules: igb
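A quicker way to pull just the vendor:device ID that setpci needs below, rather than reading the full lspci -nv dump:

# lspci -nn | grep -i ethernet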
Default MMRBC PCI
Show the default MMRBC value for all four NIC ports by PCI ID (see above for how to find the PCI ID of the NIC).
# setpci -d 8086:1521 e6.b
00
00
00
00
MMRBC Values
MM    Value in bytes
22    512 (default)
26    1024
2a    2048
2e    4096
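Sketch of raising MMRBC to 4096 bytes with setpci, using the 2e value from the table above. Note that -d writes to every function matching the vendor:device ID; use -s 07:00.3 (and so on) to target a single port, and the change does not survive a reboot. Since the i350 is a PCIe device, this PCI-X era register may not be the one that matters; lspci -vv shows the PCIe MaxReadReq value instead:

# setpci -d 8086:1521 e6.b=2e
# setpci -d 8086:1521 e6.b       (read back to verify)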
ESXi machine
# esxcfg-nics -l
Name    PCI           Driver  Link  Speed     Duplex  MAC Address        MTU   Description
vmnic0  0000:00:19.0  e1000e  Up    1000Mbps  Full    94:de:80:af:1a:60  9000  Intel Corporation 82579V Gigabit Network Connection
vmnic1  0000:03:00.0  igb     Down  0Mbps     Full    a0:36:9f:18:3a:a4  9000  Intel Corporation I350 Gigabit Network Connection
vmnic2  0000:03:00.1  igb     Down  0Mbps     Full    a0:36:9f:18:3a:a5  9000  Intel Corporation I350 Gigabit Network Connection
vmnic3  0000:01:00.0  igb     Up    1000Mbps  Full    2c:53:4a:00:0f:f6  9000  Intel Corporation I350 Gigabit Network Connection
vmnic4  0000:01:00.1  igb     Up    1000Mbps  Full    2c:53:4a:00:0f:f7  9000  Intel Corporation I350 Gigabit Network Connection
vmnic5  0000:01:00.2  igb     Up    1000Mbps  Full    2c:53:4a:00:0f:f8  9000  Intel Corporation I350 Gigabit Network Connection
vmnic6  0000:01:00.3  igb     Up    1000Mbps  Full    2c:53:4a:00:0f:f9  9000  Intel Corporation I350 Gigabit Network Connection