PMC and Mellanox Jointly Promote NVMe over RDMA Data Transfer and P2P

Accelerating data transfer while making the most efficient use of the CPU and the DDR bus is a good criterion for evaluating a data center architecture. Recently, PMC, with its high-speed NVRAM card, and Mellanox jointly demonstrated how NVMe over RDMA and peer-to-peer (P2P) high-speed transfers can free up CPU and DDR bus resources while significantly increasing data transfer speed. The joint demonstration consists of two parts. The first shows how NVMe and RDMA can be combined to provide large-scale, low-latency, high-performance block access to NVM at the remote end. The second integrates Mellanox's RDMA peer-initiated operations with the PMC Flashtec NVRAM accelerator card, using its memory-mapped I/O (MMIO) space as the RDMA target to give remote access to large amounts of persistent memory. Each part is described in detail below.

NVM Express over RDMA

NVMe over RDMA (NoR) demonstrates the potential of extending the NVMe protocol to run over RDMA. The demonstration uses two computers, one as the client and the other as the server; both are equipped with Mellanox ConnectX-3 Pro NICs and are connected via RoCEv2. The NVMe device used in the demonstration is the high-performance, low-latency PMC Flashtec™ NVRAM accelerator. The figure below shows a block diagram of the demo.


This demo shows that using RDMA to carry NVMe commands and data adds very little latency and does not affect throughput.
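Conceptually, the client wraps each NVMe command in a small capsule and hands it to the RDMA NIC; the server unpacks it, executes it against its local NVRAM device, and moves the data payload with RDMA reads or writes. The sketch below illustrates this idea in C using the verbs API over an already-connected queue pair. The capsule layout and the nor_* names are illustrative assumptions for this post, not the actual wire format used in the demo.

/* Conceptual sketch only: carry a 64-byte NVMe submission-queue entry in an
 * RDMA SEND over a connected RC queue pair. The capsule layout is an
 * illustrative assumption, not the demo's real wire format. */
#include <infiniband/verbs.h>
#include <stdint.h>

/* Illustrative capsule: the standard 64-byte NVMe command plus the RDMA
 * keys the target would need to fetch or place the data remotely. */
struct nor_capsule {
    uint8_t  nvme_sqe[64];   /* NVMe submission queue entry (e.g. read/write) */
    uint64_t data_addr;      /* client buffer address for RDMA READ/WRITE */
    uint32_t data_rkey;      /* rkey of the registered client data buffer */
    uint32_t data_len;       /* transfer length in bytes */
};

/* Post the capsule on a connected queue pair. 'mr' must cover 'capsule'. */
static int nor_post_command(struct ibv_qp *qp, struct ibv_mr *mr,
                            struct nor_capsule *capsule)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)capsule,
        .length = sizeof(*capsule),
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = (uintptr_t)capsule,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;

    /* The target receives the capsule, executes the NVMe command against the
     * local device, moves data with RDMA READ/WRITE, and returns a response. */
    return ibv_post_send(qp, &wr, &bad_wr);
}

Because the NIC handles the transport and the payload moves via RDMA reads and writes, the host CPUs on both sides stay out of the data path, which is why the added latency remains so small.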

Comparing the average latency of a local NVMe device with that of a remote NVMe device, as shown in the table, the NoR setup adds less than 10 microseconds of latency.


The other set of data compares the throughput of a local NVMe device with that of a remote NVMe device. As the table below shows, the NoR setup causes no loss of throughput.


Peer-to-Peer Transfers Between RDMA and PCIe Devices

This demonstration uses a peer-initiated approach to connect the remote client directly to the NVRAM/NVMe device on the server, bypassing the detour through the server's CPU and DRAM that standard RDMA requires. Mellanox's RoCEv2-capable ConnectX-3 Pro RDMA NIC is combined with PMC's Flashtec NVRAM accelerator to implement peer-initiated operations between the NIC and the NVRAM card. Peer-initiated operations give the remote client direct access to the NVRAM accelerator card, which, compared with the traditional RDMA path, reduces latency and effectively frees up CPU and DRAM resources.
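The key enabler on the server side is that the NVRAM card's MMIO window can be registered as an RDMA memory region, so the NIC's DMA engine reads and writes the card directly over PCIe instead of staging data in DRAM. Below is a minimal sketch of that registration step, assuming a kernel with peer-memory (PeerDirect-style) support for MMIO registrations; the PCI sysfs path and BAR size are placeholders, not values from the demo.

/* Minimal sketch, assuming peer-memory support in the kernel and an NVRAM
 * card that exposes its persistent memory through a PCIe BAR. The sysfs
 * path and BAR size are placeholders; error handling is kept minimal. */
#include <infiniband/verbs.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

#define NVRAM_BAR_PATH "/sys/bus/pci/devices/0000:03:00.0/resource0" /* hypothetical */
#define NVRAM_BAR_SIZE (16UL * 1024 * 1024)                          /* assumed size */

int main(void)
{
    /* 1. Map the NVRAM card's BAR (its MMIO window) into the process. */
    int fd = open(NVRAM_BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open BAR"); return 1; }
    void *bar = mmap(NULL, NVRAM_BAR_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap BAR"); return 1; }

    /* 2. Open the RDMA device and create a protection domain. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* 3. Register the MMIO window as an RDMA memory region. With peer-memory
     *    support the NIC can then serve remote READ/WRITE requests against
     *    the NVRAM card directly, without staging data in server DRAM. */
    struct ibv_mr *mr = ibv_reg_mr(pd, bar, NVRAM_BAR_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* The rkey and the registered address are handed to the client, whose
     * RDMA READ/WRITE operations then land directly on the card. */
    printf("NVRAM BAR registered: rkey=0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    munmap(bar, NVRAM_BAR_SIZE);
    close(fd);
    ibv_free_device_list(devs);
    return 0;
}

In the traditional RDMA path the same transfer would first land in a DRAM staging buffer and then be copied to the card, which is exactly the CPU and memory-bus traffic the peer-initiated path eliminates.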


Similarly, this demo uses two computers, one as the client and the other as the server. Using a PCIe switch on the server side greatly improves the performance of peer-initiated operations.

Comparing traditional RDMA with peer-initiated RDMA, the DRAM bandwidth that remains available in the background on the server side, measured while running perftest, is as follows:
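perftest is the standard verbs benchmarking suite; a bandwidth run of this kind can be driven with ib_write_bw, for example as below. The device name, message size, and address are placeholders, not the demo's actual settings.

# on the server: wait for the client (rdma_cm connection setup, suitable for RoCEv2)
ib_write_bw -d mlx4_0 -R

# on the client: stream 64 KB RDMA WRITEs to the server at 192.168.1.10
ib_write_bw -d mlx4_0 -R -s 65536 192.168.1.10

In the demo, the interesting number is not the RDMA bandwidth itself but how much server DRAM bandwidth remains free while such a test is running.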


The following table compares the average latency of traditional RDMA and peer-initiated RDMA, with results obtained from fio in RDMA mode:
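fio ships an rdma I/O engine, which is presumably what the demo used; the exact job options are not given in the post. As a generic illustration, a queue-depth-1 latency job of the kind used for such comparisons might look like the following, here shown against a local NVMe block device with placeholder paths and parameters.

[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
# queue depth 1 isolates per-command latency
iodepth=1
runtime=60
time_based=1

[nvme-latency]
# placeholder device path for the namespace under test
filename=/dev/nvme0n1

Running the same queue-depth-1 job over the traditional and the peer-initiated paths yields directly comparable average latencies.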


Both RDMA and NVMe are on a strong upward trajectory: RDMA provides long-distance, large-scale, low-latency, and highly efficient data movement, while NVMe provides low-latency access to SSDs. Combining the two technologies delivers exceptional performance.
