
I'm trying to link two namespaces through an ovs-dpdk bridge, in this case using net_pcap vdevs. This works (ping succeeds between them), but the latency is huge and no TCP connection can be established.

I then ran another test, using testpmd to forward between the veths instead of ovs-dpdk, and the result is the same.

Using: Ubuntu 20.04.5, kernel 5.15.0-50, DPDK 22.07.0, OVS 3.0.90

The problem is that the latency between the namespaces is far too high when ovs-dpdk or testpmd sits in between, compared to a direct veth connection or a non-netdev (kernel datapath) bridge. Also, when the namespaces are connected via ovs-dpdk or testpmd, a simple iperf3 test fails to connect (the invocation I use is sketched after the list below).

  • Using namespace--veth--namespace: ping latency ~0.05 ms, iperf3 ~30 Gbps
  • Using namespace--dpdk--namespace: ping latency ~10 ms, iperf3 fails to connect
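
The iperf3 numbers above come from running the server in one namespace and the client in the other; a minimal sketch of that invocation (standard iperf3 flags; the exact options are my assumption, not from the original post):

ip netns exec TESTER2 iperf3 -s
ip netns exec TESTER1 iperf3 -c 13.13.13.2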

FIRST TEST

ip link add dev ve-41 type veth peer name ve-42
ip link set dev ve-41 up
ip link set dev ve-42 up

ip netns add TESTER1
ip netns add TESTER2
ip link set ve-41 netns TESTER1
ip link set ve-42 netns TESTER2

ip netns exec TESTER1 ifconfig ve-41 13.13.13.1/24
ip netns exec TESTER2 ifconfig ve-42 13.13.13.2/24

ip netns exec TESTER1 ping 13.13.13.2
PING 13.13.13.2 (13.13.13.2) 56(84) bytes of data.
64 bytes from 13.13.13.2: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 13.13.13.2: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 13.13.13.2: icmp_seq=3 ttl=64 time=0.053 ms

SECOND TEST

ip link add dev ve-51 type veth peer name ve-52
ip link set dev ve-51 up
ip link set dev ve-52 up

ip link add dev ve-61 type veth peer name ve-62
ip link set dev ve-61 up
ip link set dev ve-62 up

ip netns add TESTER1
ip netns add TESTER2
ip link set ve-51 netns TESTER1
ip link set ve-62 netns TESTER2

ip netns exec TESTER1 ifconfig ve-51 13.13.13.1/24
ip netns exec TESTER2 ifconfig ve-62 13.13.13.2/24

dpdk-testpmd -l 5-8 -n 4 --vdev 'net_pcap0,iface=ve-52' --vdev 'net_pcap1,iface=ve-61'
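
Started this way, testpmd begins forwarding on its own in the default io mode; an interactive variant (a sketch using standard testpmd options, not from the original post) makes the forwarding start explicit:

dpdk-testpmd -l 5-8 -n 4 --vdev 'net_pcap0,iface=ve-52' --vdev 'net_pcap1,iface=ve-61' -- -i
testpmd> start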

ip netns exec TESTER1 ping 13.13.13.2
PING 13.13.13.2 (13.13.13.2) 56(84) bytes of data.
64 bytes from 13.13.13.2: icmp_seq=1 ttl=64 time=10.7 ms
64 bytes from 13.13.13.2: icmp_seq=2 ttl=64 time=11.0 ms
64 bytes from 13.13.13.2: icmp_seq=3 ttl=64 time=8.63 ms

For the second test I also replaced testpmd with ovs-dpdk, with the same results:

sudo ovs-vsctl --no-wait add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 ve-51 -- set Interface ve-51 type=dpdk options:dpdk-devargs=net_pcap51,iface=ve-51
ovs-vsctl add-port br0 ve-62 -- set Interface ve-62 type=dpdk options:dpdk-devargs=net_pcap62,iface=ve-62

ovs-vsctl get Open_vSwitch . dpdk_initialized
true
ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="0x7", dpdk-socket-mem="2048", pmd-cpu-mask="0x3"}
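
Beyond dpdk_initialized, the ports and PMD threads can be inspected while traffic flows; a sketch using standard OVS commands (my addition, not from the original post), checking for port errors and where the PMD threads spend cycles:

ovs-vsctl show
ovs-appctl dpif-netdev/pmd-stats-show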

When iperf3 fails to connect (client to server), the server is of course already running. If I change the bridge from netdev to a normal (kernel datapath) bridge, iperf3 works (~30 Gbps).
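
For reference, the working normal-bridge comparison looks like this (a sketch of the kernel-datapath equivalent, assuming the same veth peers as above):

ovs-vsctl del-br br0
ovs-vsctl add-br br0
ovs-vsctl add-port br0 ve-52
ovs-vsctl add-port br0 ve-61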

I don't know if this is a good/valid test, but I want to interconnect two namespaces through an ovs-dpdk bridge. I did a similar test using xdp_redirect in How to apply XDP_REDIRECT to VETH peers.

The question is: how can I attach two namespaces, with low latency, to an ovs-dpdk bridge that also carries a memif interface or a physical/external port via vfio? Can they all work together with low latency and high throughput? Are the dpdk-devargs correct?
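
One alternative I have not tried above: if OVS is built with AF_XDP support (--enable-afxdp), the veth peers can be attached to the userspace datapath natively instead of going through the net_pcap PMD; a hedged sketch, not from the original setup:

ovs-vsctl add-port br0 ve-52 -- set interface ve-52 type=afxdp
ovs-vsctl add-port br0 ve-61 -- set interface ve-61 type=afxdp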

Topology: TESTER1 [ve-51]---[ve-52] (testpmd / ovs-dpdk br0) [ve-61]---[ve-62] TESTER2
