Questions tagged [mellanox]

Mellanox Technologies (NASDAQ: MLNX) offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application run time and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services.

Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability.

Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel.

63 questions
3 votes, 1 answer

What is the replacement for libibverbs/librdmacm on Windows?

We have our application running on Linux, using the RDMA (InfiniBand) interface for communication between two modules. We would now like to support our application on Windows and are therefore looking for an IB Verbs replacement. We tried installing the Mellanox drivers…

asked by dudhaniss
3 votes, 1 answer

What do IB read, IB write, OB read and OB write mean in the output of Intel® PCM while monitoring PCIe bandwidth?

I am trying to measure the PCIe bandwidth of NIC devices using the Intel® Performance Counter Monitor (PCM) tools, but I am unable to understand its output. To measure the PCIe bandwidth, I executed the binary pcm-iio. This binary helps to…
2 votes, 2 answers

DPDK IPv4 flow filtering on Mellanox

I have a DPDK application that uses Boost.Asio to join a multicast group and receives multicast IPv4 UDP packets over a VLAN on a particular UDP port (other UDP ports are also used for other traffic). I am trying to receive only those…

asked by Leon He
2 votes, 2 answers

RDMA scatter/gather in the verbs API

RDMA scatter/gather is a nice way to consolidate data transfers. For example, the verbs API allows data at multiple locations to be written to a remote buffer with a single RDMA write operation; or, data in a remote buffer could be read into multiple…

asked by Weijia Song
2 votes, 1 answer

Mapping remote memory into the address space of a host using InfiniBand

I recently started to work with InfiniBand cards, two Mellanox Technologies MT27700 Family [ConnectX-4] to be specific. Eventually, I want to extend an existing framework with interfaces based on the VPI Verbs API/RDMA CM API. About the research I…

asked by Silicon1602
2 votes, 1 answer

Increase the Memory Translation Table (MTT) for a Mellanox Connect-IB card

I have a fat node with 2 TB of memory. With the new Connect-IB card, I want to increase the MTT so that I can register a large memory region. I found the post "HowTo Increase Memory Size used by Mellanox Adapters", but it didn't mention how to…

asked by Zack
2 votes, 1 answer

rsocket (RDMA socket API): client unable to connect to server

I have written a simple client and server program using rsocket, the RDMA socket API, with the following versions of the librdmacm-dev and librdmacm1 packages (on Ubuntu 14.04): librdmacm1/trusty 1.0.16-1 i386, librdmacm-dev/trusty 1.0.16-1 i386. When the server is…
2 votes, 1 answer

What does the Mellanox interrupt mlx4-async@pci:0000 ... mean?

I'm using a Mellanox InfiniBand card [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] with OFED version 4-1.0.0 on Ubuntu (kernel 3.13.0) running on an x86_64 computer with 4 cores. Here is the result of ibstat on my computer: CA 'mlx4_0' CA type:…

asked by Fopa Léon Constantin
1 vote, 0 answers

nvidia_p2p_get_pages() failing with error code -22

I am implementing NVIDIA GDS with the following hardware configuration: Ubuntu 22.04, CUDA 12.1, NVIDIA drivers 530.30.2, MLNX driver 5.8.0, NVIDIA GeForce RTX 3090, Samsung 980 DC NVMe drive. IOMMU is disabled, and the PCIe BAR has been resized to that of VRAM…
1 vote, 1 answer

ConnectX-5 DPDK performance degradation when enabling jumbo, scatter and multi segs

I'm using a ConnectX-5 NIC. I have a DPDK application in which I want to support jumbo packets. To do that I add the rx offload capabilities DEV_RX_OFFLOAD_JUMBO_FRAME and DEV_RX_OFFLOAD_SCATTER, and the tx offload capability DEV_TX_OFFLOAD_MULTI_SEGS. I…

asked by hudac
1 vote, 1 answer

Soft lockup with DPDK 19 and Mellanox ConnectX-5

I have a server running CentOS 7.9 (kernel 3.10.0-1160.53.1.el7.x86_64). When running my DPDK 19 multi-process application I get a soft lockup. The server has two ixgbe 10G NICs and one 100G ConnectX-5. /home/testpmd --no-affinity -l 1-62 -n 4 --proc-type…

asked by yaron
1 vote, 1 answer

DPDK: MPLS packet processing

I am trying to build a multi-RX-queue DPDK program, using RSS to split the incoming traffic into RX queues on a single port. A Mellanox ConnectX-5 and DPDK version 19.11 are used for this purpose. It works fine when I use IP-over-Ethernet packets, as…
1 vote, 0 answers

XDP and eBPF performance with an AMD EPYC CPU

I usually run XDP applications on servers with Intel Xeon Gold CPUs; performance was always good and never a problem: up to 125 Mpps with a 100 GbE MCX515A-CCAT network card and 2 CPUs in a 1U server. Today I was trying to make it work on…
1 vote, 2 answers

How can I receive Ethernet frames with ibverbs?

I want to write a simple test program to receive Ethernet frames using the ibverbs API. The code below compiles and runs but never receives any packets. I'm using Mellanox ConnectX-3 hardware on Ubuntu 18. Questions: if, while running this RX…

asked by Andrew Bainbridge
1 vote, 0 answers

"Invalid module format" error while loading a module in CentOS 6.6

I have two twin servers with the same hardware (InfiniBand and an NVIDIA Tesla) and the same OS (CentOS 6.6, same kernel and drivers). On host1 everything is working fine as usual, while on host2 I can no longer run this service, because I get this…

asked by Stefano.C