
I set up a DPDK-compatible environment and then tried to send packets using dpdk-testpmd, expecting to see them received on another server. I am using the vfio-pci driver in no-IOMMU (unsafe) mode. I ran

$./dpdk-testpmd -l 11-15 -- -i

which had output like

EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:01:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: E4:43:4B:4E:82:00
Checking link statuses...
Done

then

testpmd> set nbcore 4
Number of forwarding cores set to 4
testpmd> show config fwd
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 12 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=BE:A6:27:C7:09:B4

My nbcore is not being set correctly; even 'txonly' mode was not being set before I set the eth-peer address, though some parameters are working. Moreover, if I don't change the burst delay, my server crashes as soon as I start transmitting, even though it has a 10G Ethernet port (80 MBps available bandwidth by my calculation). Hence, I am not seeing packets at the receiving server when tailing tcpdump on the corresponding receiving interface. What is happening here and what am I doing wrong?

Nafiul Alam Fuji
  • Please always share the DPDK version, the NIC PMD and firmware version if a NIC is used, and the current configuration. The question is not clear, and once you bind the port to DPDK igb_uio or vfio-pci, tcpdump will not work. But you are stating `tailing tcpdump`. Is your question `I am unable to rx or tx packets on the DPDK port?` – Vipin Varghese Dec 01 '22 at 17:38
  • I am using DPDK 21.11.2 LTS. And I know tcpdump will not work with a DPDK-bound port; I was taking tcpdump on the receiving side, where DPDK is not running. I was using DPDK on the sending server via testpmd and wanted to see the packets being received on the receiving side (later I will run testpmd as the receiver). – Nafiul Alam Fuji Dec 02 '22 at 19:47
  • `I was using dpdk in sending server using testpmd and wanted to see that packet is being received in receiving side` thanks for clarifying. But looking at the command-line arguments or the snapshot I cannot find where you are sending packets. For me, `you are not using either the right cmd line arguments or interactive arguments to start tx`. Then you talk about a server crash and all. So it is not clear what your real question is. – Vipin Varghese Dec 03 '22 at 03:51
  • So if you have a crash and want to ask about the crash, please share the crash dump. My request is `please make your question clear` so the community can help you. – Vipin Varghese Dec 03 '22 at 03:53
  • With the limited information from both the comments and the actual question, I have shared an answer with the right steps to generate traffic with testpmd. Please go through the same and accept/upvote, which will help others find the right answer to a similar query. – Vipin Varghese Dec 03 '22 at 06:21
  • Sorry that my question wasn't clear to you. The thing is, I did start the port, configured it as txonly, and then started. And there were no crash logs; the whole server becomes unreachable as soon as I issue the start command. – Nafiul Alam Fuji Dec 03 '22 at 14:48
  • The thing is, it has a 10G port but the max available bandwidth is 80 MB/s (measured by iperf). So I was trying to limit the bandwidth by reducing the burst length or packets per second, just to make sure the sending rate stays under the available bandwidth, as we think the port gets overwhelmed with packets and becomes unreachable even though we maintain the SSH connection through another port. That's why I asked why the command is not executing right. What is your suggestion to keep the sending rate under 80 MB/s in txonly mode in the testpmd interactive mode? – Nafiul Alam Fuji Dec 03 '22 at 14:52
  • As mentioned in my earlier comments, I am not able to see any testpmd args or interactive runtime command which shows me you have configured the application in txonly or flowgen. So the only answer to your problem is that the connection to the remote server is not proper. Can you come for a live debug now? – Vipin Varghese Dec 03 '22 at 14:53
  • Please focus on the problem: if you have a 10Gbps connection between the ports it will show up as 10Gbps and not 1Gbps. So either the cable is faulty or there is something wrong with your setup. Once again I ask, are you ready for a live debug? – Vipin Varghese Dec 03 '22 at 14:55
  • Ok, I will share my whole process as soon as I get near a PC. I didn't share the whole command as I thought the post would be too big, and thanks a lot for the help. – Nafiul Alam Fuji Dec 03 '22 at 14:56
  • I am not interested in the whole process; your question is `unable to send and receive packets with testpmd with Intel NIC Fortville`. My goal is to help you debug and get it resolved. – Vipin Varghese Dec 03 '22 at 15:13
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/250117/discussion-between-nafiul-alam-fuji-and-vipin-varghese). – Nafiul Alam Fuji Dec 03 '22 at 17:15
  • I am not comfortable combining multiple questions and results into the same chat; send me an invite for Google Meet or Zoom for a live debug. – Vipin Varghese Dec 04 '22 at 04:50
  • If you are free now I can send you a Zoom link. – Nafiul Alam Fuji Dec 04 '22 at 04:53
  • I have opened a room, just let me know if you are available. Meeting ID: 715 8908 4797 Passcode: fF397j – Nafiul Alam Fuji Dec 04 '22 at 04:57
  • I was removed from the meeting, please send me a Google Meet link. – Vipin Varghese Dec 04 '22 at 05:41
  • https://meet.google.com/zkj-tfty-mqy – Nafiul Alam Fuji Dec 04 '22 at 05:42

1 Answer


Based on the question and the answers in the comments, the real intention is to send packets from DPDK testpmd using an Intel Fortville NIC (net_i40e) to a remote server. The real reason traffic is not being generated is that neither the application command line nor the interactive options are set to create packets via dpdk-testpmd.

In order to generate packets, there are 2 options in testpmd:

  1. start tx_first: this sends out a default burst of 32 packets as soon as forwarding is started.
  2. forward mode tx-only: this puts the ports under dpdk-testpmd into transmit-only mode; once forwarding is started, they transmit packets of the default packet size (see the sketch after this list).
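
A minimal interactive sketch of both options (assuming the port from the question is already bound and testpmd was launched with -i, as in the original command):

# option 1: send an initial burst of packets and keep forwarding
testpmd> start tx_first

# option 2: switch to transmit-only mode, then start forwarding
testpmd> set fwd txonly
testpmd> start

# stop forwarding and check the TX counters when done
testpmd> stop
testpmd> show port stats all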

Neither of these options is used, hence my suggestions are:

  1. please walk through the DPDK documentation on testpmd and its configuration
  2. make use of either --tx-first or --forward-mode=txonly as per the DPDK Testpmd Command-line Options (a command-line sketch follows this list)
  3. make use of either start tx_first, set fwd txonly, or set fwd flowgen in interactive mode; refer to the Testpmd Runtime Functions
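
For example, reusing the launch line from the question, a hedged command-line sketch (the core list is taken from the question; <peer-mac> is a placeholder to be replaced with the receiving interface's MAC address):

$./dpdk-testpmd -l 11-15 -- -i --forward-mode=txonly --eth-peer=0,<peer-mac>

Because -i keeps testpmd in interactive mode, forwarding still has to be kicked off with start at the testpmd> prompt.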

With this, traffic will be generated from testpmd and sent to the device (remote server). A quick example of the same:

dpdk-testpmd --file-prefix=test1 -a81:00.0 -l 7,8 --socket-mem=1024 -- --burst=128 --txd=8192 --rxd=8192 --mbcache=512 --rxq=1 --txq=1 --nb-cores=2 -a --forward-mode=io --rss-udp --enable-rx-cksum --no-mlockall --no-lsc-interrupt --enable-drop-en --no-rmv-interrupt -i

From the above example, the config parameters are:

  • the number of packets per RX/TX burst is set by --burst=128
  • the number of RX/TX queues is configured by --rxq=1 --txq=1
  • the number of cores to use for RX/TX is set by --nb-cores=2
  • the forwarding mode (flowgen, txonly, rxonly, or io) is set by --forward-mode=io (an interactive-mode equivalent is sketched after this list)
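
As referenced above, a rough interactive-mode equivalent of those parameters (a sketch only; it assumes testpmd was launched with enough forwarding cores and that the NIC allows reconfiguring the queue count while the port is stopped):

testpmd> port stop all
testpmd> port config all rxq 2
testpmd> port config all txq 2
testpmd> port start all
testpmd> set burst 128
testpmd> set nbcore 2
testpmd> set fwd txonly
testpmd> start

Note that testpmd uses at most as many forwarding cores as there are forwarding streams, so set nbcore 4 with a single queue per port still leaves one active forwarding core, which matches the cores=1 line in the show config fwd output in the question.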

Hence, as pointed out in the comments, neither set nbcore 4 nor any other setting in the testpmd arguments or interactive commands shows that the application is configured for TX only.

The second part of the query is really confusing, because it states:

Moreover, if I don't change the burst delay, my server crashes as soon as I start transmitting, even though it has a 10G Ethernet port (80 MBps available bandwidth by my calculation). Hence, I am not seeing packets at the receiving server when tailing tcpdump on the corresponding receiving interface. What is happening here and what am I doing wrong?

I assume "my server" here means the remote server to which packets are being sent by dpdk-testpmd, because there is a mention of checking packets with tcpdump (an Intel Fortville X710 bound to a UIO driver is removed from the kernel network stack, so tcpdump would not work on the DPDK side).

The mention of 80 MBps (around 0.64 Gbps) is really strange. If the remote interface is set to promiscuous mode and an AF_XDP or raw-socket application is configured to receive traffic, it works at line rate (10 Gbps). Since there are no logs or crash dumps from the remote server, and it is highly unlikely that actual traffic was generated from testpmd, this looks more like a config or setup issue on the remote server.
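
On the receiving side (a sketch, assuming the remote interface is a regular kernel-managed NIC and <remote-iface> is a placeholder for its name), a basic sanity check could be:

ip link set <remote-iface> promisc on
tcpdump -i <remote-iface> -e -nn

Printing the Ethernet headers with -e makes it easy to spot frames coming from the sending port's MAC (E4:43:4B:4E:82:00 in the question's output) and confirm whether any traffic arrives at all.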

[EDIT-1] Based on the live debug, it is confirmed that:

  1. DPDK was not installed - fixed by running ninja install
  2. the DPDK NIC port eno2 is not connected to the remote server directly
  3. the DPDK NIC port eno2 is connected through a switch
  4. the DPDK application testpmd is not crashing - confirmed with pgrep testpmd
  5. instead, when set fwd txonly is used, packets flood the switch and SSH packets from the other port are dropped

Solution: please use a separate switch for data-path testing, or use a direct connection to the remote server.

Vipin Varghese