Under extreme concurrency, the kernel's automatic balancing of interrupt load across CPUs is not the most efficient arrangement. In that case you should bind the NIC's soft interrupts to specific CPUs; high-end NICs with multiple queues can bind each queue to its own CPU core, which improves Linux network throughput.
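On a multi-queue NIC each queue gets its own interrupt vector (lines such as eth0-TxRx-0 in /proc/interrupts; the exact names depend on the driver), and each vector can be pinned to its own core. A hypothetical sketch, assuming IRQs 40-43 belong to four queues of a multi-queue eth0 (not the hardware used below):
echo 0 > /proc/irq/40/smp_affinity_list   # eth0-TxRx-0 -> CPU0
echo 1 > /proc/irq/41/smp_affinity_list   # eth0-TxRx-1 -> CPU1
echo 2 > /proc/irq/42/smp_affinity_list   # eth0-TxRx-2 -> CPU2
echo 3 > /proc/irq/43/smp_affinity_list   # eth0-TxRx-3 -> CPU3
The NICs in this example expose only a single vector each, so the rest of the article pins whole interfaces rather than individual queues.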
This example uses an Intel Atom D525 CPU.
Check the IRQ resources:
[root@LeeRouter ~]# cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
   0:        138          3          4          7   IO-APIC-edge      timer
   1:          1          0          0          1   IO-APIC-edge      i8042
   8:          0          1          0          0   IO-APIC-edge      rtc0
   9:          1          0          1          1   IO-APIC-fasteoi   acpi
  12:          0          1          2          1   IO-APIC-edge      i8042
  14:          0          0          0          0   IO-APIC-edge      ata_piix
  15:          0          0          0          0   IO-APIC-edge      ata_piix
  16:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
  18:        869        860        871        877   IO-APIC-fasteoi   ehci_hcd:usb1, uhci_hcd:usb7, ata_piix
  19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb6
  21:          8          9          9          8   IO-APIC-fasteoi   uhci_hcd:usb4
  23:          0          0          0          0   IO-APIC-fasteoi   ehci_hcd:usb2, uhci_hcd:usb5
  24:          0          0          0          0   PCI-MSI-edge      pciehp
  25:          0          0          0          0   PCI-MSI-edge      pciehp
  26:          0          0          0          0   PCI-MSI-edge      pciehp
  27:          0          0          0          0   PCI-MSI-edge      pciehp
  28:          0          0          0          0   PCI-MSI-edge      pciehp
  29:          0          0          0          0   PCI-MSI-edge      pciehp
  30:          2          3          2          2   PCI-MSI-edge      i915
  31:     254814     360483     272099     272226   PCI-MSI-edge      eth0
  32:        518        504        740        524   PCI-MSI-edge      eth1
  33:     369892     269639     243485     243150   PCI-MSI-edge      eth2
 NMI:         35         43         32         35   Non-maskable interrupts
 LOC:      67123      43720      59099      90278   Local timer interrupts
 SPU:          0          0          0          0   Spurious interrupts
 PMI:         35         43         32         35   Performance monitoring interrupts
 IWI:          0          0          0          0   IRQ work interrupts
 RES:        613        555        402        473   Rescheduling interrupts
 CAL:        848       1010        137        157   Function call interrupts
 TLB:        451        414       1140       1105   TLB shootdowns
 TRM:          0          0          0          0   Thermal event interrupts
 THR:          0          0          0          0   Threshold APIC interrupts
 MCE:          0          0          0          0   Machine check exceptions
 MCP:         16         16         16         16   Machine check polls
As the counts show, the interrupts for eth0, eth1, and eth2 are spread roughly evenly across all 4 CPU cores.
echo 0 > /proc/irq/31/smp_affinity_list
echo 1 > /proc/irq/32/smp_affinity_list
echo 2 > /proc/irq/33/smp_affinity_list
This assigns each NIC's soft interrupt to a different CPU core: IRQ 31 (eth0) to CPU0, IRQ 32 (eth1) to CPU1, and IRQ 33 (eth2) to CPU2.
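Note that each IRQ also exposes /proc/irq/<n>/smp_affinity, which takes a hexadecimal CPU bitmask rather than plain CPU numbers (there, echo 1 means CPU0, echo 4 means CPU2); smp_affinity_list, used here, takes the CPU index directly. To verify the result, read the files back. If the irqbalance daemon is installed and running it will periodically redistribute IRQs and overwrite these manual settings, so it is usually stopped first; the service commands below assume a SysV-style init such as CentOS 6:
cat /proc/irq/31/smp_affinity_list   # expect: 0  (eth0 -> CPU0)
cat /proc/irq/32/smp_affinity_list   # expect: 1  (eth1 -> CPU1)
cat /proc/irq/33/smp_affinity_list   # expect: 2  (eth2 -> CPU2)
service irqbalance stop
chkconfig irqbalance off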
The settings are lost on reboot, so put the commands into /etc/rc.local:
vi /etc/rc.local
echo 0 > /proc/irq/31/smp_affinity_list
echo 1 > /proc/irq/32/smp_affinity_list
echo 2 > /proc/irq/33/smp_affinity_list
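IRQ numbers are not guaranteed to stay the same across reboots or driver updates, so a slightly more defensive rc.local variant can look the IRQ up by interface name. A sketch under that assumption (it presumes one interrupt line per NIC, as in the output above):
for pair in eth0:0 eth1:1 eth2:2; do
    dev=${pair%%:*}; cpu=${pair##*:}
    # Find the IRQ number whose last field is this NIC's name, then pin it
    irq=$(awk -v d="$dev" '$NF == d { sub(":", "", $1); print $1 }' /proc/interrupts)
    [ -n "$irq" ] && echo "$cpu" > "/proc/irq/$irq/smp_affinity_list"
done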