119. VF Stacked Bonded

The stacked bonded mechanism allows a bonded port to be added to another bonded port as a slave.

The demand arises from a discussion with a prospective customer for a 100G NIC based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs support a proper x16 PCIe interface, so the host sees a single netdev and that netdev corresponds directly to the 100G Ethernet port. The customer indicated that in their current system they bond multiple 100G NICs together, using the DPDK bonding API in their application. They are interested in an alternative source for the 100G NIC and are in conversation with Silicom, who are shipping a 100G RRC based NIC (something like Boulder Rapids). The issue they have with the RRC NIC is that it presents as two PCIe interfaces (netdevs) instead of one. If DPDK bonding could operate at a first level on the two RRC netdevs to present a single netdev, the application could then bond multiple of these bonded interfaces to implement NIC bonding.

119.1. Prerequisites

119.1.1. hardware configuration

All link ports of the tester/DUT should run at the same data rate and support full duplex.
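
This requirement can be checked before the run. Below is a minimal Python sketch (the interface names are placeholders, not part of the plan) that parses ethtool output and asserts one shared data rate and full duplex across all ports:

    import re
    import subprocess

    IFACES = ["tport_iface0", "tport_iface1"]  # placeholder interface names

    def speed_duplex(iface):
        # ethtool prints lines such as "Speed: 100000Mb/s" and "Duplex: Full"
        out = subprocess.run(["ethtool", iface], capture_output=True,
                             text=True, check=True).stdout
        speed = re.search(r"Speed:\s*(\S+)", out)
        duplex = re.search(r"Duplex:\s*(\S+)", out)
        return speed.group(1), duplex.group(1)

    results = {i: speed_duplex(i) for i in IFACES}
    assert len({s for s, _ in results.values()}) == 1, f"mixed rates: {results}"
    assert all(d == "Full" for _, d in results.values()), f"not full duplex: {results}"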

NIC/DUT/TESTER ports requirements:

  • Tester: 2/4 NIC ports
  • DUT: 2/4 NIC ports

enable link-down-on-close on the tester ports:

ethtool --set-priv-flags {tport_iface0} link-down-on-close on
ethtool --set-priv-flags {tport_iface1} link-down-on-close on
ethtool --set-priv-flags {tport_iface2} link-down-on-close on
ethtool --set-priv-flags {tport_iface3} link-down-on-close on

create 1 VF on each of the 4 DUT ports:

echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:31\:00.2/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:31\:00.3/sriov_numvfs

disable spoofchk for each VF:

ip link set dev {pf0_iface} vf 0 spoofchk off
ip link set dev {pf1_iface} vf 0 spoofchk off
ip link set dev {pf2_iface} vf 0 spoofchk off
ip link set dev {pf3_iface} vf 0 spoofchk off
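
The two prerequisite steps above can be scripted together. Below is a minimal Python sketch under the same assumptions as the commands (placeholder PCI addresses and PF interface names); it writes sriov_numvfs through sysfs and disables spoof checking with ip(8):

    import subprocess

    # placeholder PCI address / PF netdev pairs from the steps above
    PFS = [("0000:31:00.0", "pf0_iface"),
           ("0000:31:00.1", "pf1_iface"),
           ("0000:31:00.2", "pf2_iface"),
           ("0000:31:00.3", "pf3_iface")]

    for pci, iface in PFS:
        path = f"/sys/bus/pci/devices/{pci}/sriov_numvfs"
        with open(path, "w") as f:   # clear any existing VFs first
            f.write("0")
        with open(path, "w") as f:   # then create one VF
            f.write("1")
        subprocess.run(["ip", "link", "set", "dev", iface,
                        "vf", "0", "spoofchk", "off"], check=True)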

port topology diagram (4 peer links):

TESTER                                          DUT
             physical link              logical link
.---------.                .------------------------------------------------.
| portA 0 | <------------> | portB pf0vf0 <---> .--------.                  |
|         |                |                    | bond 0 | <-----> .------. |
| portA 1 | <------------> | portB pf1vf0 <---> '--------'         |      | |
|         |                |                                       |bond2 | |
| portA 2 | <------------> | portB pf2vf0 <---> .--------.         |      | |
|         |                |                    | bond 1 | <-----> '------' |
| portA 3 | <------------> | portB pf3vf0 <---> '--------'                  |
'---------'                '------------------------------------------------'

119.2. Test cases

The tx-offloads value is set according to the NIC type. The steps for the slave-down test cases use 4 ports; the other test cases use 2 ports.

119.3. Test Case: basic behavior

Allow a bonded port to be added to another bonded port. This is supported by the following modes (a per-mode command sketch follows the notes below):

balance-rr    0
active-backup 1
balance-xor   2
broadcast     3
balance-tlb   5
balance-alb   6

  1. 802.3ad mode is not supported if one or more of the slaves is a bonded device.
  2. Add the same device twice to check that the exception handling is correct.
  3. The master bonded port and each slave have the same queue configuration.
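
For illustration, the whole basic-behavior sequence can be generated per mode. The sketch below (plain string generation, no DPDK bindings assumed) emits the testpmd command sequence used in the steps that follow, for each stackable mode; 802.3ad (4) is excluded per note 1:

    # ports 0/1 are the bound VFs; 2/3 become 1st-level bonds; 4 is the top bond
    STACKABLE_MODES = {"balance-rr": 0, "active-backup": 1, "balance-xor": 2,
                       "broadcast": 3, "balance-tlb": 5, "balance-alb": 6}

    def stacked_bond_cmds(mode, socket=0):
        return [f"create bonded device {mode} {socket}",   # creates port 2
                "add bonding slave 0 2",
                f"create bonded device {mode} {socket}",   # creates port 3
                "add bonding slave 1 3",
                f"create bonded device {mode} {socket}",   # creates port 4
                "add bonding slave 2 4",
                "add bonding slave 3 4",
                "show bonding config 4"]

    for name, mode in STACKABLE_MODES.items():
        print(f"# {name}")
        print("\n".join(stacked_bond_cmds(mode)))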

119.3.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd, stop all ports:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
    testpmd> port stop all
    
  3. create first bonded port and add one slave, check bond 2 config status:

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 0 2
    testpmd> show bonding config 2
    
  4. create second bonded port and add one slave, check bond 3 config status:

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 1 3
    testpmd> show bonding config 3
    
  5. create third bonded port and add the first/second bonded ports as its slaves, then check that the slaves were added successfully. Stacked bonding is forbidden in mode 4 (802.3ad): mode 4 fails to add a bonded port as a slave:

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 2 4
    testpmd> add bonding slave 3 4
    testpmd> show bonding config 4
    
  6. check that the master bonded port and each slave port have the same queue configuration:

    testpmd> show bonding config 0
    testpmd> show bonding config 1
    testpmd> show bonding config 2
    testpmd> show bonding config 3
    testpmd> show bonding config 4
    
  7. start the top-level bonded port and check the port start action:

    testpmd> port start 4
    testpmd> start
    
  8. close testpmd:

    testpmd> stop
    testpmd> quit
    
  9. repeat the steps above with each of the following mode numbers:

    balance-rr    0
    active-backup 1
    balance-xor   2
    broadcast     3
    802.3ad       4
    balance-tlb   5
    

119.4. Test Case: active-backup stacked bonded rx traffic

Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and check the testpmd packet statistics.

119.4.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd, stop all ports:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
    testpmd> port stop all
    
  3. create first bonded port and add one port as slave:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 2
    
  4. create second bonded port and add one port as slave:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 1 3
    
  5. create third bonded port and add the first/second bonded ports as its slaves, then check that the slaves were added successfully:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 2 4
    testpmd> add bonding slave 3 4
    testpmd> show bonding config 4
    
  6. start top level bond port:

    testpmd> port start 4
    testpmd> start
    
  7. send 100 TCP packets to each of portA 0 and portA 1:

    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
    
  8. the first and second bonded ports should each receive 100 packets, and the third bonded port should receive 200 packets:

    testpmd> show port stats all
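
The expected counters can be verified mechanically. Below is a minimal Python sketch (an illustration, not part of the plan) that pulls per-port RX counters out of captured "show port stats all" output; it assumes one "RX-packets: <n>" line per port, in port order:

    import re

    # index i of the returned list corresponds to testpmd port i
    def rx_counters(stats_text):
        return [int(n) for n in re.findall(r"RX-packets:\s*(\d+)", stats_text)]

    def check(stats_text):
        rx = rx_counters(stats_text)
        assert rx[2] == 100 and rx[3] == 100, f"1st-level bonds: {rx}"
        assert rx[4] == 200, f"top-level bond: {rx}"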
    
  9. close testpmd:

    testpmd> stop
    testpmd> quit
    

119.5. Test Case: active-backup stacked bonded rx traffic with slave down

Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded port down, send TCP packets with scapy, and check the testpmd packet statistics.

119.5.1. steps

  1. bind four ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
    
  2. boot up testpmd, stop all ports:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
    testpmd> port stop all
    
  3. create first bonded port and add two ports as slaves:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 1 4
    
  4. set portB pf0vf0 down by bringing down its tester peer, portA 0:

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down
    

Note

The VF port's link status cannot be changed directly. Bring the peer (tester) port down to force the VF link down.
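
That workaround can be scripted. Below is a minimal Python sketch (the interface name is a placeholder) that enables link-down-on-close, brings the tester peer down, and polls sysfs until the kernel reports the tester link as down:

    import subprocess
    import time

    def peer_link_down(tester_iface, timeout=10.0):
        # the VF link cannot be forced down directly, so flap the
        # tester-side peer port instead (see the note above)
        subprocess.run(["ethtool", "--set-priv-flags", tester_iface,
                        "link-down-on-close", "on"], check=True)
        subprocess.run(["ifconfig", tester_iface, "down"], check=True)
        deadline = time.time() + timeout
        while time.time() < deadline:
            with open(f"/sys/class/net/{tester_iface}/operstate") as f:
                if f.read().strip() == "down":
                    return
            time.sleep(0.5)
        raise TimeoutError(f"{tester_iface} still up after {timeout}s")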

  5. create second bonded port and add two ports as slaves:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 2 5
    testpmd> add bonding slave 3 5
    
  6. set portB pf2vf0 down by bringing down its tester peer, portA 2:

    ethtool --set-priv-flags {portA 2} link-down-on-close on
    ifconfig {portA 2} down
    

Note

The VF port's link status cannot be changed directly. Bring the peer (tester) port down to force the VF link down.

  7. create third bonded port and add the first/second bonded ports as its slaves, then check that the slaves were added successfully:

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6
    
  8. start the top-level bonded port:

    testpmd> port start 6
    testpmd> start
    
  9. send 100 packets to portB pf0vf0/pf1vf0/pf2vf0/pf3vf0 separately (a VF whose tester peer is down is reached from its DUT PF instead):

    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf2>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 3>, count=100)
    
  10. check that the first and second bonded ports each receive 100 packets and the third bonded device receives 200 packets:

    testpmd> show port stats all
    
  11. close testpmd:

    testpmd> stop
    testpmd> quit
    

119.6. Test Case: balance-xor stacked bonded rx traffic

Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy, and check the packet statistics.

119.6.1. steps

  1. bind two ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2>
    
  2. boot up testpmd, stop all ports:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
    testpmd> port stop all
    
  3. create first bonded port and add one port as slave:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 2
    
  4. create second bonded port and add one port as slave:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 1 3
    
  5. create third bonded port and add the first/second bonded ports as its slaves, then check that the slaves were added successfully:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 2 4
    testpmd> add bonding slave 3 4
    testpmd> show bonding config 4
    
  6. start top level bond port:

    testpmd> port start 4
    testpmd> start
    
  7. send 100 packets to each of portB pf0vf0 and portB pf1vf0:

    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
    
  8. check that the first and second bonded ports each receive 100 packets and the third bonded device receives 200 packets:

    testpmd> show port stats all
    
  9. close testpmd:

    testpmd> stop
    testpmd> quit
    

119.7. Test Case: balance-xor stacked bonded rx traffic with slave down

Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded device down, send TCP packets with scapy, and check the packet statistics.

119.7.1. steps

  1. bind four ports:

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>
    
  2. boot up testpmd, stop all ports:

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
    testpmd> port stop all
    
  3. create first bonded port and add two ports as slaves:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 1 4
    
  4. set portB pf0vf0 down by bringing down its tester peer, portA 0:

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down
    

Note

The VF port's link status cannot be changed directly. Bring the peer (tester) port down to force the VF link down.

  5. create second bonded port and add two ports as slaves:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 2 5
    testpmd> add bonding slave 3 5
    
  6. set portB pf2vf0 down by bringing down its tester peer, portA 2:

    ethtool --set-priv-flags {portA 2} link-down-on-close on
    ifconfig {portA 2} down
    

Note

The VF port's link status cannot be changed directly. Bring the peer (tester) port down to force the VF link down.

  7. create third bonded port and add the first/second bonded ports as its slaves, then check that the slaves were added successfully:

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6
    
  8. start the top-level bonded port:

    testpmd> port start 6
    testpmd> start
    
  9. send 100 packets to portB pf0vf0/pf1vf0/pf2vf0/pf3vf0 separately (a VF whose tester peer is down is reached from its DUT PF instead):

    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf0>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portB pf2>, count=100)
    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=<portA 3>, count=100)
    
  10. check that the first and second bonded ports each receive 100 packets and the third bonded device receives 200 packets:

    testpmd> show port stats all
    
  11. close testpmd:

    testpmd> stop
    testpmd> quit